This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Adding Accuracy file ?? #550

Closed
tanujdhiman opened this issue Aug 3, 2021 · 8 comments

Comments

@tanujdhiman

tanujdhiman commented Aug 3, 2021

Hello everybody !!

In this repository, I found all the files for taking a model from development to deployment, i.e. from model building through inference. But there is no cross validation of the model, such as Accuracy, F1 Score, Confusion Matrix, etc.

If such a file already exists, please let me know; if not, can I add one?

Thanks

AB#4334

@ant0nsc
Contributor

ant0nsc commented Aug 4, 2021

Hi - I'm not sure I understand the question. Can you clarify, maybe add an example of what it is that you are trying to achieve?

@tanujdhiman
Author

Thanks for replying @ant0nsc.
Basically, InnerEye-DeepLearning is a toolbox for training models on medical images using PyTorch.

Any deep learning model goes through the following phases:

  • Development
  • Training
  • Testing
  • Evaluation
  • Deployment

In this repository, training and testing are covered, but evaluation is missing: we should be able to evaluate the model, for example checking how much accuracy it has, what its F1 score is, etc.

What I want to add is an evaluation file with the following functions:

  • Accuracy
  • F1 Score
  • Confusion Matrix

Through these functions we can evaluate our trained model.
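
For illustration, a rough sketch of what such an evaluation file could look like (just an example using scikit-learn, not code that already exists in this repository):

```python
# evaluate.py -- illustrative sketch of the proposed evaluation file.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score


def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Compute accuracy, F1 score and the confusion matrix for a trained model."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "f1_score": f1_score(y_true, y_pred, average="macro"),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }


# Example: ground-truth labels vs. predictions from a trained model.
print(evaluate(np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])))
```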
Hope you got my point!
Thanks

@ant0nsc
Contributor

ant0nsc commented Aug 5, 2021

In your first post, you wrote about cross validation, but not in your second. Can you clarify how cross validation comes into play here?
Also, please clarify what you mean by "evaluation". We evaluate the final trained model on the test set. By evaluation, do you mean running another dataset through the model, one that was not initially available?

@tanujdhiman
Author

Yes. By evaluation I mean: suppose we have trained the model on Medical dataset - 1, then evaluation is:

  • Test on Medical dataset - 2
  • Checking Accuracy of the model
  • Checking F1 Score

Based on these we can evaluate how good our model is.

@Shruthi42
Contributor

Hi @tanujdhiman, there's documentation here on how to use model checkpoints to run evaluation. If you are using AzureML, you can run evaluation on a model registered by a training run. You can also run evaluation on checkpoints by specifying the path to the checkpoint on your local system or a URL to download the checkpoint from.

If you are using one of the segmentation or classification configs, the config will automatically partition the input dataset into training, validation and test datasets. To avoid this and run inference on the entire input dataset you will need to add the option --restrict_subjects=0,0,+ when running the script. This will move all the input data to the test dataset.
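
For example, an invocation could look roughly like this (the runner path and --model flag are from the repository documentation as far as I recall, so please double-check; only --restrict_subjects is from the comment above):

```shell
# <YourModelConfig> is a placeholder for your segmentation or classification config name.
# Add the checkpoint path/URL options described above as needed.
python InnerEye/ML/runner.py --model=<YourModelConfig> --restrict_subjects=0,0,+
```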

@tanujdhiman
Author

Thanks @Shruthi42
But what about checking the model's accuracy, confusion matrix, and other such metrics?

@Shruthi42
Contributor

It depends on the type of model you are running. The classification and segmentation models write out reports with metrics and other useful information after the inference step finishes. For custom PyTorch Lightning models, you can define your own report with metrics, etc. by overriding the create_report method in the config. There is a very simple example of this in the HelloContainer config.
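
For instance, a rough sketch of the kind of logic a create_report override could run (this is standalone and illustrative; the real base class, hook signature and output locations are what the HelloContainer example shows, not this snippet):

```python
# Illustrative only: in the repository you would put this logic inside the
# create_report method of your container config (see HelloContainer for the
# actual hook); the class below is a plain standalone stand-in.
from pathlib import Path

import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score


class MetricsReport:
    def __init__(self, outputs_folder: Path) -> None:
        self.outputs_folder = outputs_folder

    def create_report(self, y_true: np.ndarray, y_pred: np.ndarray) -> Path:
        """Write accuracy, macro F1 and the confusion matrix to reports/metrics.txt."""
        report_file = self.outputs_folder / "reports" / "metrics.txt"
        report_file.parent.mkdir(parents=True, exist_ok=True)
        report_file.write_text(
            f"accuracy: {accuracy_score(y_true, y_pred):.4f}\n"
            f"macro F1: {f1_score(y_true, y_pred, average='macro'):.4f}\n"
            f"confusion matrix:\n{confusion_matrix(y_true, y_pred)}\n"
        )
        return report_file
```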

You can find generated reports in the reports subfolder in the output directory of a run.

@tanujdhiman
Author

Okay, got it.
Thanks
