When reproducing the experiment with `maptr_tiny_r50_24e.py` using a batch size of 2 per GPU on 4 GPUs (2080 Ti), the loss results look abnormal, especially the `loss_dir` terms.
However, the same reproduction with a batch size of 2 per GPU on 8 GPUs (2080 Ti) ran fine end to end, and the losses follow the expected trend. Has anyone run into the same problem? What might be the cause?