[BE] use relative backwards references in torch.optim._multi_tensor #129132
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/129132
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit c86c6f5 with merge base 7178b4e. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D58815471
Force-pushed from 56ef5b4 to cf9717a
…ytorch#129132)

Summary: Pull Request resolved: pytorch#129132

This avoids some internal test failures caused by "infinite" recursion during import. Example: P1430048846

A bit of history:
- PR pytorch#127703 introduced a circular dependency: `torch/optim/__init__.py` imports `torch.optim._multi_tensor`, and `torch/optim/_multi_tensor/__init__.py` imports `torch.optim`. This seemed to work fine (green signals everywhere) but caused some internal test failures after it landed: an infinite recursion during import.
- PR pytorch#128875 attempted to fix this by removing the import from `torch/optim/__init__.py`. This also seemed to work fine (green signals everywhere, and the failing tests started passing), but a smaller number of tests then started failing: unable to import `torch.optim._multi_tensor`.
- This diff re-introduces the import but avoids the infinite recursion by using relative package paths.

Test Plan: CI signals. Previously failing tests are passing:
* https://www.internalfb.com/intern/testinfra/testrun/4785074840480058
* https://www.internalfb.com/intern/testinfra/testrun/281475349103659
* https://www.internalfb.com/intern/testinfra/testrun/11540474083575936

Differential Revision: D58815471
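The shape of the fix can be sketched with a minimal, hypothetical package (the names below are made up for illustration, not PyTorch's): the parent package eagerly imports a private submodule, and the submodule pulls names back from the parent via a relative import, which resolves against the parent object already in `sys.modules`:

```python
import os
import sys
import tempfile
import textwrap

# Hypothetical two-level package mirroring torch.optim / torch.optim._multi_tensor.
tmp = tempfile.mkdtemp()
sub = os.path.join(tmp, "optimpkg", "_multi")
os.makedirs(sub)

with open(os.path.join(tmp, "optimpkg", "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class Adam:              # stand-in for torch.optim.Adam
            pass

        import optimpkg._multi   # parent eagerly imports the submodule
    """))

with open(os.path.join(sub, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        # Relative import: resolved against the partially-initialized parent
        # package already present in sys.modules.
        from .. import Adam as _Adam

        class Adam(_Adam):       # multi-tensor flavor under the same name
            pass

        del _Adam                # keep the submodule namespace clean
    """))

sys.path.insert(0, tmp)
import optimpkg

# The submodule imported cleanly and its class derives from the parent's.
print(optimpkg._multi.Adam.__mro__[1] is optimpkg.Adam)
```

Whether an absolute import would recurse here depends on the environment; this sketch only demonstrates that the relative form works even while the parent's `__init__.py` is still mid-execution.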
Force-pushed from cf9717a to c86c6f5
del _Adam
del _AdamW
del _NAdam
del _SGD
del _RAdam
del _RMSprop
del _Rprop
del _ASGD
del _Adamax
del _Adadelta
del _Adagrad
Why is this needed?
It's not strictly required; I just didn't want to pollute the namespace.
import torch.optim._multi_tensor
This was removed in #128875
What is the reason for adding it back?
Removing it caused some test failures. Given the specific order in which imports happened, some tests could not resolve `torch.optim._multi_tensor`.
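The order dependence can be seen with any stdlib package: a dotted submodule only becomes an attribute of its parent package once some module has actually imported it. Here `xml.dom` / `xml.dom.minidom` stand in for `torch.optim` / `torch.optim._multi_tensor`:

```python
import sys

import xml.dom

# In a fresh interpreter this is False: the parent package does not expose
# the submodule until the submodule itself has been imported somewhere.
before = hasattr(xml.dom, "minidom")

import xml.dom.minidom

# After the import, the attribute exists and the module is cached.
after = hasattr(xml.dom, "minidom")
print(before, after)
```

This is why dropping the eager `import torch.optim._multi_tensor` from `torch/optim/__init__.py` made attribute access fail in tests that never imported the submodule themselves.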
from .. import (
    Adadelta as _Adadelta,
    Adagrad as _Adagrad,
    Adam as _Adam,
    Adamax as _Adamax,
    AdamW as _AdamW,
    ASGD as _ASGD,
    NAdam as _NAdam,
    RAdam as _RAdam,
    RMSprop as _RMSprop,
    Rprop as _Rprop,
    SGD as _SGD,
)
Suggested change:
- from .. import (
-     Adadelta as _Adadelta,
-     Adagrad as _Adagrad,
-     Adam as _Adam,
-     Adamax as _Adamax,
-     AdamW as _AdamW,
-     ASGD as _ASGD,
-     NAdam as _NAdam,
-     RAdam as _RAdam,
-     RMSprop as _RMSprop,
-     Rprop as _Rprop,
-     SGD as _SGD,
- )
+ import torch

Use `torch.optim.XXX` below.
Circular imports with absolute paths are causing failures in our environment (infinite recursion). Using relative paths fixes that.
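One way to see why the relative form avoids re-entering the parent: `from .. import X` resolves the parent package by name against `sys.modules`, so an already-initialized parent is reused rather than having its `__init__.py` executed again. A stdlib sketch using `importlib`'s relative resolution, with `collections` / `collections.abc` standing in for `torch.optim` / `torch.optim._multi_tensor`:

```python
import importlib
import sys

import collections.abc

# Resolve ".." relative to collections.abc, the same way a relative import
# inside collections/abc.py would: it lands on the cached parent package.
parent = importlib.import_module("..", package="collections.abc")
print(parent is sys.modules["collections"])
```

The same module object comes back; no second execution of the parent's `__init__.py` takes place.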
Summary:
This avoids some internal test failures caused by "infinite" recursion during import. Example: P1430048846

A bit of history:
- PR [BE][Easy] export explicitly imported public submodules #127703 introduced a circular dependency: `torch/optim/__init__.py` imports `torch.optim._multi_tensor`, and `torch/optim/_multi_tensor/__init__.py` imports `torch.optim`. This seemed to work fine (green signals everywhere) but caused some internal test failures after it landed: an infinite recursion during import.
- PR Remove circular import #128875 attempted to fix this by removing the import from `torch/optim/__init__.py`. This also seemed to work fine (green signals everywhere, and the failing tests started passing), but a smaller number of tests then started failing: unable to import `torch.optim._multi_tensor`.
- This PR re-introduces the import but avoids the infinite recursion by using relative package paths.

Differential Revision: D58815471