Including AdaBound in the list of Optimizers. #46809
Labels
enhancement
module: optimizer
triaged
🚀 Feature
Hello all,
I would like to add AdaBound to the list of existing optimizers in the torch.optim module.
Here is the link to the paper: https://openreview.net/pdf?id=Bkg3g2R9FX
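For context, the core of the algorithm is Adam with the element-wise step size clipped between dynamic bounds that converge to a single final learning rate, so training gradually transitions from Adam-like to SGD-like behavior. Here is a minimal sketch of that update rule (the class name `AdaBoundSketch` and the simplified bound schedule are for illustration only; the actual PR would follow the paper and the reference implementation more closely):

```python
import math
import torch
from torch.optim import Optimizer

class AdaBoundSketch(Optimizer):
    """Minimal sketch of the AdaBound update rule; illustration only."""

    def __init__(self, params, lr=1e-3, betas=(0.9, 0.999),
                 final_lr=0.1, gamma=1e-3, eps=1e-8):
        defaults = dict(lr=lr, betas=betas, final_lr=final_lr,
                        gamma=gamma, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        # closure handling omitted for brevity in this sketch
        for group in self.param_groups:
            beta1, beta2 = group['betas']
            for p in group['params']:
                if p.grad is None:
                    continue
                state = self.state[p]
                if len(state) == 0:
                    state['step'] = 0
                    state['exp_avg'] = torch.zeros_like(p)
                    state['exp_avg_sq'] = torch.zeros_like(p)
                state['step'] += 1
                t = state['step']

                # Adam-style first and second moment estimates.
                exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
                exp_avg.mul_(beta1).add_(p.grad, alpha=1 - beta1)
                exp_avg_sq.mul_(beta2).addcmul_(p.grad, p.grad, value=1 - beta2)

                bias_c1 = 1 - beta1 ** t
                bias_c2 = 1 - beta2 ** t
                step_size = group['lr'] * math.sqrt(bias_c2) / bias_c1
                denom = exp_avg_sq.sqrt().add_(group['eps'])

                # Dynamic bounds that both converge to final_lr, so the
                # per-parameter step size interpolates from Adam to SGD.
                lower = group['final_lr'] * (1 - 1 / (group['gamma'] * t + 1))
                upper = group['final_lr'] * (1 + 1 / (group['gamma'] * t))

                # Clip the element-wise step size, then apply momentum.
                eta = torch.full_like(denom, step_size).div_(denom)
                eta.clamp_(lower, upper).mul_(exp_avg)
                p.sub_(eta)
```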
Motivation
In most of my models, I noticed that Adam started off well but never fully converged, or got stuck even after scheduling the learning rate, while starting with SGD instead made the loss turn into NaNs early on. Out of curiosity, I switched the optimizer from Adam to SGD in the middle of training and got noticeably better results; researching this further led me to the paper above, which proposes exactly such a gradual transition from Adam to SGD. I therefore believe it would be good to have this optimizer available in PyTorch by default.
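For concreteness, the mid-training switch was nothing more elaborate than rebuilding the optimizer partway through the loop, roughly like this toy sketch (the model, loss, and cutoff epoch are placeholders for illustration):

```python
import torch
import torch.nn as nn

# Toy setup for illustration only.
model = nn.Linear(10, 1)
num_epochs = 20

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(num_epochs):
    # Swap in SGD partway through training; the cutoff here is arbitrary.
    if epoch == num_epochs // 2:
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    x = torch.randn(32, 10)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```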
Pitch
I have already implemented the optimizer; once I get the green light from your side, I will send a pull request.
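If accepted, usage would presumably mirror the existing torch.optim optimizers, along these lines (hypothetical API; the exact name and defaults would be settled in review):

```python
# Hypothetical usage, assuming the class is exposed as torch.optim.AdaBound
# with the paper's final_lr hyperparameter.
optimizer = torch.optim.AdaBound(model.parameters(), lr=1e-3, final_lr=0.1)
```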
cc @vincentqb