Counter-intuitive behavior of nn.CrossEntropy/nn.NLLLoss with weights and issue with gradient accumulation #72047
Comments
Hi, thanks for the suggestion. Note that in the case you share, even when weights are not involved, if you get a partial batch from your dataloader, then the final loss will also be wrong. So when doing multi-batch accumulation for a single backward, I would recommend using the sum reduction and doing the division yourself at the end.
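A minimal sketch of that recommendation, assuming a toy model and micro-batches (none of the names or values below come from the original comment):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for the user's model and dataloader.
model = torch.nn.Linear(10, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
micro_batches = [(torch.randn(4, 10), torch.randint(0, 3, (4,))),
                 (torch.randn(2, 10), torch.randint(0, 3, (2,)))]  # last batch is partial

optimizer.zero_grad()
n_examples = 0
for x, y in micro_batches:
    # Sum (not average) the per-example losses; .backward() accumulates
    # gradients of the running sum into .grad.
    F.cross_entropy(model(x), y, reduction="sum").backward()
    n_examples += y.numel()

# Divide the accumulated gradients by the true example count once, at the
# end, which stays correct even when the last micro-batch is partial.
for p in model.parameters():
    if p.grad is not None:
        p.grad /= n_examples
optimizer.step()
```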
Yes, the weighted average is documented clearly. It's still a counter-intuitive choice I suspect many people overlook, but I take your point about BC/defaulting to the weighted average. You are right about partial batches -- and, perhaps worse, a number of batches not divisible by the number of grad accumulation steps -- the code above was just for exposition purposes. The user would have to write some additional code to manually compute the divisor in either the sum or mean case. I point out the gradient accumulation issue because I suspect many people using gradient accumulation are using the default (weighted) mean reduction, which would be very clunky to address even with additional code.
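For concreteness, a hedged sketch of what that additional code might look like in the weighted case: the true divisor is the summed class weights of all targets across every micro-batch, so it has to be tracked alongside the gradients (all names and values below are illustrative):

```python
import torch
import torch.nn.functional as F

# Illustrative class weights, model, and data.
weight = torch.tensor([1.0, 2.0, 0.5])
model = torch.nn.Linear(10, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
micro_batches = [(torch.randn(4, 10), torch.randint(0, 3, (4,))),
                 (torch.randn(4, 10), torch.randint(0, 3, (4,)))]

optimizer.zero_grad()
weight_mass = 0.0
for x, y in micro_batches:
    # Weighted *sum* of per-example losses; normalization is deferred.
    F.cross_entropy(model(x), y, weight=weight, reduction="sum").backward()
    # Track the summed weights of the targets seen so far.
    weight_mass += weight[y].sum().item()

# Normalize by the accumulated weight mass to reproduce the weighted
# mean over the full effective batch.
for p in model.parameters():
    if p.grad is not None:
        p.grad /= weight_mass
optimizer.step()
```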
Is this a duplicate of #61309?
Ah, thanks for pointing that out. One of the reasons I posted this issue is that the weighted mean makes implementing gradient accumulation properly very difficult for weighted losses (it's a bit annoying but very doable to implement gradient accumulation with unweighted mean reduction).
Closing this issue for now as a duplicate, but the additional context is useful! Let's continue discussion on #61309 to get this resolved.
🚀 The feature, motivation and pitch
The behavior of mean reduction in nn.CrossEntropyLoss and nn.NLLLoss is counter-intuitive when there are class weights, as discussed in #9882. The current behavior performs a weighted average instead of an unweighted average; the unweighted average is probably what people expect.
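A small self-contained illustration of the current behavior (all values chosen arbitrarily for the example):

```python
import torch
import torch.nn.functional as F

weight = torch.tensor([1.0, 3.0])
logits = torch.randn(4, 2)
target = torch.tensor([0, 1, 1, 0])

# With reduction='none', the per-example losses are already scaled by
# the weight of each example's target class.
per_example = F.cross_entropy(logits, target, weight=weight, reduction="none")

# Current 'mean': divide by the *summed weights*, i.e. a weighted average.
current = F.cross_entropy(logits, target, weight=weight, reduction="mean")
assert torch.isclose(current, per_example.sum() / weight[target].sum())

# What many users likely expect: divide by the number of examples instead.
unweighted = per_example.sum() / target.numel()
```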
This counter-intuitive behavior also causes an issue when doing gradient accumulation. In particular, when you adjust the loss function to account for gradient accumulation (i.e. to make the divisor `batch_size` x `n_grad_accum_steps` instead of just `batch_size`), you no longer have the exact gradients (i.e. the gradients you would have had if your batch size was `batch_size` x `n_grad_accum_steps`). You can of course address this issue if you use `reduction='sum'` and manually average the loss, but this is clunky and probably frequently overlooked.
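A hedged numeric sketch of the mismatch (the tensor values are made up): with the weighted mean, dividing each micro-batch loss by `n_grad_accum_steps` does not reproduce the weighted mean over the combined batch, because each micro-batch has its own weight denominator.

```python
import torch
import torch.nn.functional as F

weight = torch.tensor([1.0, 3.0])
logits = torch.randn(8, 2)
target = torch.tensor([0, 1, 1, 0, 1, 1, 0, 1])

# Weighted mean over the full batch: the "exact" target quantity.
full = F.cross_entropy(logits, target, weight=weight, reduction="mean")

# Two micro-batches, each divided by n_grad_accum_steps = 2. The two halves
# have different summed target weights (8 vs 10 here), so the accumulated
# loss does not equal `full`.
half1 = F.cross_entropy(logits[:4], target[:4], weight=weight, reduction="mean")
half2 = F.cross_entropy(logits[4:], target[4:], weight=weight, reduction="mean")
accumulated = (half1 + half2) / 2

print(full.item(), accumulated.item())  # generally not equal
```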
Possible solution

A straightforward solution, and the most intuitive one -- at least to me -- would be:

- `reduction='mean'` performs an unweighted average
- `reduction='weighted_mean'` performs a weighted average (the current behavior of `mean`)

This would also make the usual gradient accumulation adjustment exact in the `mean` case, which is probably what users expect to happen.
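A hedged sketch of the two proposed reductions in terms of today's primitives; note that `weighted_mean` is the hypothetical name proposed above, not an existing PyTorch option:

```python
import torch
import torch.nn.functional as F

def nll_with_reduction(logp, target, weight, reduction):
    # Semantics sketch of the proposal; 'weighted_mean' is *not* a real
    # PyTorch reduction today.
    per_example = F.nll_loss(logp, target, weight=weight, reduction="none")
    if reduction == "mean":            # proposed: divide by the example count
        return per_example.sum() / target.numel()
    if reduction == "weighted_mean":   # proposed: current 'mean' behavior
        return per_example.sum() / weight[target].sum()
    raise ValueError(f"unknown reduction: {reduction}")

# Example usage with arbitrary values:
logp = torch.log_softmax(torch.randn(4, 2), dim=1)
weight = torch.tensor([1.0, 3.0])
target = torch.tensor([0, 1, 1, 0])
print(nll_with_reduction(logp, target, weight, "mean"),
      nll_with_reduction(logp, target, weight, "weighted_mean"))
```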
Alternatives

No response
Additional context
No response
cc @albanD @mruberry @jbschlosser @walterddr @kshitij12345