Abnormal loss tendency when training with a smaller batch size #48

Open
mxcheeto opened this issue Apr 1, 2023 · 1 comment

Comments

@mxcheeto

mxcheeto commented Apr 1, 2023

When reproducing the experiment with maptr_tiny_r50_24e.py using a batch size of 2 per GPU on 4 GPUs (2080 Ti), the loss curves look abnormal, especially the loss_dir group:

[screenshots: training loss curves, including the loss_dir group]

However, the reproduction with a batch size of 2 per GPU on 8 GPUs (2080 Ti) went fine overall, and the loss curves show the expected trend. Has anyone run into the same problem? What might be the cause?
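
One factor worth checking, sketched below under the common linear learning-rate scaling rule (this is an assumption about the training recipe, not something taken from the MapTR repo): the two runs differ in effective batch size (2 × 4 = 8 vs 2 × 8 = 16), so a learning rate tuned for the 8-GPU run may be too large for the 4-GPU run if it is left unchanged. The `base_lr` value below is a placeholder, not the actual value from maptr_tiny_r50_24e.py.

```python
# Sketch of the linear learning-rate scaling rule (an assumption, not MapTR code):
# the learning rate is scaled in proportion to the effective (total) batch size.

def scaled_lr(base_lr: float, base_total_batch: int,
              samples_per_gpu: int, num_gpus: int) -> float:
    """Return the learning rate rescaled for a new total batch size."""
    total_batch = samples_per_gpu * num_gpus
    return base_lr * total_batch / base_total_batch

base_lr = 6e-4  # placeholder; the real value is set in maptr_tiny_r50_24e.py
# Reference run: 2 samples/GPU x 8 GPUs = 16. Rescale for the 4-GPU run:
print(scaled_lr(base_lr, base_total_batch=16, samples_per_gpu=2, num_gpus=4))  # -> 3e-4
```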

@123dbl

123dbl commented Aug 21, 2023

Where can I set the batch size?
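
For reference, a hedged sketch only: MapTR follows the MMDetection3D-style config system, where the per-GPU batch size is normally set via `samples_per_gpu` in the `data` dict of the config file (e.g. maptr_tiny_r50_24e.py). The keys below illustrate that convention and are not copied from the repo.

```python
# Assumed MMDetection3D-style config fragment (not copied from maptr_tiny_r50_24e.py):
# the per-GPU batch size is controlled by `samples_per_gpu`.
data = dict(
    samples_per_gpu=2,   # batch size on each GPU
    workers_per_gpu=4,   # dataloader worker processes per GPU
    # train=dict(...), val=dict(...), test=dict(...) as in the original config
)
# The effective batch size is samples_per_gpu multiplied by the number of GPUs passed
# to the distributed launcher (e.g. ./tools/dist_train.sh <config> <num_gpus>).
```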
