
Loss functions #42

Closed

PeterMcGor opened this issue Jan 19, 2021 · 4 comments

Comments

@PeterMcGor

Hi all;

I have been reading the paper and realised that the loss functions are not explicitly stated. Are you using the original loss functions as described in the architecture references?

Best!

@JoHof
Owner

JoHof commented Jan 19, 2021

Hi Peter,

thanks for the comment. You are right, we should have stated the loss used. Sorry about that. We used cross entropy.

@PeterMcGor
Author

Thanks for the quick answer!

I just loaded your trained U-Net model to try transfer learning and saw that the last activation layer is LogSoftmax. Would NLLLoss be more appropriate?
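
For what it's worth, a minimal sketch of how that check might look, assuming the package's `mask.get_model` helper (the entry point and the `'R231'` model name are assumptions and may differ between versions):

```python
# Hypothetical sketch: load the pretrained model and inspect its layers.
# Assumes lungmask exposes mask.get_model; 'R231' is an assumed model name.
from lungmask import mask

model = mask.get_model('unet', 'R231')
print(model)  # the printed module tree ends in a LogSoftmax activation
```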

Cheers!

@JoHof
Owner

JoHof commented Jan 19, 2021

You can either remove the LogSoftmax and use CrossEntropyLoss, or keep it and use NLLLoss. I guess it won't make much difference. I think I added the LogSoftmax for this inference code only and used cross entropy for training.
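
For illustration, a minimal sketch of that equivalence in PyTorch (toy tensors only; per the PyTorch docs, `nn.CrossEntropyLoss` combines `LogSoftmax` and `NLLLoss`):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy segmentation-shaped tensors: (batch, classes, H, W) logits
# and (batch, H, W) integer class labels per pixel.
logits = torch.randn(4, 3, 8, 8)
targets = torch.randint(0, 3, (4, 8, 8))

# Option 1: raw logits straight into CrossEntropyLoss.
loss_ce = nn.CrossEntropyLoss()(logits, targets)

# Option 2: LogSoftmax over the class dimension, then NLLLoss.
log_probs = nn.LogSoftmax(dim=1)(logits)
loss_nll = nn.NLLLoss()(log_probs, targets)

# The two match up to floating-point precision.
assert torch.allclose(loss_ce, loss_nll)
print(loss_ce.item(), loss_nll.item())
```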

@PeterMcGor
Author

Sure! Just curious about stability and reproducibility.
Cheers!
