
Sparse annotations for training #51

Open · SebastienTs opened this issue Aug 17, 2023 · 2 comments

SebastienTs commented Aug 17, 2023

Is there currently a way to use sparse annotations for training, i.e., for pixel classification, ignoring all pixels set to 0 in the target image during loss computation and considering only pixels with a value > 0? If that is not possible, what would be the minimal modification to the code to achieve this (at least for some, or ideally for all, existing losses)?

mdraw (Member) commented Aug 17, 2023

Hello @SebastienTs, yes, this is possible with the regular torch.nn.CrossEntropyLoss by setting its ignore_index parameter to the label ID that you want to ignore. Since 0 is often reserved for a background class, I usually use other ID values. It just has to be the ID that is used in the label files; no changes to the model outputs are needed.
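A minimal sketch of this (shapes and the sentinel ID 255 are illustrative choices, not anything prescribed by elektronn3) — pixels carrying the ignored label contribute nothing to the loss or its gradient:

```python
import torch
import torch.nn as nn

# Toy shapes: batch of 2, 3 classes, 4x4 images
logits = torch.randn(2, 3, 4, 4)                # raw model outputs
target = torch.randint(0, 3, (2, 4, 4))        # dense label IDs

# Mark some pixels as "unlabeled" with a sentinel ID (here 255).
# This ID only needs to appear in the label files, not in the model outputs.
target[0, :2, :] = 255

criterion = nn.CrossEntropyLoss(ignore_index=255)
loss = criterion(logits, target)  # averaged over non-ignored pixels only
```

The loss is averaged only over the pixels whose label differs from `ignore_index`, which is exactly the sparse-annotation behavior asked about.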

The DiceLoss in elektronn3 doesn't support an ignore_index option, but you can use its weight parameter to set the channel weight of the class that is to be ignored to 0.
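To illustrate why a zero channel weight excludes a class, here is a minimal sketch of a class-weighted soft Dice loss (a generic illustration, not elektronn3's exact implementation); the per-class Dice terms are scaled by `weight`, so a class with weight 0 drops out of the loss entirely:

```python
import torch

def weighted_dice_loss(probs, target_onehot, weight, eps=1e-6):
    """Soft Dice loss with per-class weights.

    probs, target_onehot: (N, C, H, W); weight: (C,).
    A class whose weight is 0 contributes nothing to the result.
    """
    dims = (0, 2, 3)  # sum over batch and spatial dims, keep class dim
    intersection = (probs * target_onehot).sum(dims)
    cardinality = (probs + target_onehot).sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)   # per-class Dice
    per_class_loss = 1.0 - dice
    # Weighted mean over classes; zero-weight classes are excluded.
    return (weight * per_class_loss).sum() / weight.sum()

# Usage sketch: weight 0 for class 0 mimics ignore_index for that class.
probs = torch.softmax(torch.randn(2, 3, 4, 4), dim=1)
labels = torch.randint(0, 3, (2, 4, 4))
onehot = torch.nn.functional.one_hot(labels, 3).permute(0, 3, 1, 2).float()
w = torch.tensor([0.0, 1.0, 1.0])
loss = weighted_dice_loss(probs, onehot, w)
```

With `w[0] = 0`, arbitrary changes to the class-0 prediction channel leave the loss value unchanged, which is the "ignore" behavior described above.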

SebastienTs (Author) commented Aug 18, 2023 via email
