Bug in alpha optimizer in MTSAC #2303
Thanks for reporting this. I suppose the overall effect is that the alpha learning rate is essentially multiplied by the number of tasks being trained. This should be about as simple as just replacing that one line of code, so we should definitely fix this.
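A minimal sketch of what that one-line fix could look like. The names (`log_alpha`, `num_tasks`) and the choice of Adam are illustrative, not garage's actual code; the point is simply that the tensor should appear in the parameter list once:

```python
import torch

num_tasks = 10  # illustrative value
# One log-alpha entry per task, stored in a single tensor.
log_alpha = torch.zeros(num_tasks, requires_grad=True)

# Fixed pattern: pass the tensor once. It already holds one entry
# per task, so there is no need to repeat it in the parameter list.
alpha_optimizer = torch.optim.Adam([log_alpha], lr=3e-4)

print(len(alpha_optimizer.param_groups[0]["params"]))  # 1
```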
There is a potential bug in how the alpha optimizer is initialized in MTSAC. During init we have:
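The code snippet from the original issue did not survive the page scrape; here is a minimal sketch of the pattern being described (the names are illustrative, not the exact garage code):

```python
import torch

num_tasks = 10  # illustrative value

# A single tensor holding one log-alpha entry per task.
log_alpha = torch.zeros(num_tasks, requires_grad=True)

# The problematic pattern: the *same* tensor object is repeated
# num_tasks times in the optimizer's parameter list.
alpha_optimizer = torch.optim.Adam([log_alpha] * num_tasks, lr=3e-4)

# The list holds num_tasks references to one tensor, not num_tasks tensors.
print(len(alpha_optimizer.param_groups[0]["params"]))
print(all(p is log_alpha for p in alpha_optimizer.param_groups[0]["params"]))
```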
Since `log_alpha` is a single tensor, the same tensor is being passed to the optimizer multiple times, which I don't think is the intended behavior. In the `to()` function further down, the optimizer is re-created with the correct initialization. PyTorch recognizes that the parameters are duplicates:
But as github.com/pytorch/pytorch/issues/40967 details, the net effect is that the `log_alpha` tensor gets updated `num_tasks` times at each step, since all the copies belong to the same param group. A quick test can show that:
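The test from the original issue is not visible in the scrape; the following sketch (assuming plain SGD and illustrative names, not the garage setup) demonstrates the multiplied update:

```python
import torch

num_tasks = 3  # illustrative value
log_alpha = torch.zeros(num_tasks, requires_grad=True)

# Buggy pattern: the same tensor appears num_tasks times in one param group.
buggy_opt = torch.optim.SGD([log_alpha] * num_tasks, lr=0.1)

log_alpha.sum().backward()  # gradient of 1.0 for every entry
buggy_opt.step()

# Each duplicate entry applies the update once, so the effective
# step is num_tasks * lr * grad instead of lr * grad.
print(log_alpha)  # each entry moved by -0.3, not -0.1
```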
@abhi-iyer