
[BE] use relative backwards references in torch.optim._multi_tensor #129132

Open · wants to merge 1 commit into main
Conversation

fbgheith
Contributor

Summary:
This avoids some internal test failures caused by "infinite" recursion during import. Example: P1430048846

A bit of history:

  • PR [BE][Easy] export explicitly imported public submodules #127703 introduced a circular dependency:
    `torch/optim/__init__.py` imports `torch.optim._multi_tensor` and
    `torch/optim/_multi_tensor/__init__.py` imports `torch.optim`.

    This seemed to work fine (green signals everywhere) but caused some internal test failures after it landed: infinite recursion during import.

  • PR Remove circular import #128875 attempted to fix this by removing the import from `torch/optim/__init__.py`.

    This also seemed to work fine: green signals everywhere, and the previously failing tests started passing. However, a smaller number of tests then started failing because they were unable to import `torch.optim._multi_tensor`.

  • This diff re-introduces the import but avoids the infinite recursion by using relative package paths.

Differential Revision: D58815471
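As a hedged sketch of the pattern this diff adopts (a made-up package `pkg` stands in for `torch.optim`, and `pkg.sub` for `torch.optim._multi_tensor`; the layout here is synthetic, not the actual torch source tree), a parent package can import a submodule that in turn pulls names back out of the parent via a relative import:

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Build a throwaway package mirroring the torch.optim layout:
# pkg/__init__.py defines Adam and imports pkg.sub, while
# pkg/sub/__init__.py pulls Adam back in via a *relative* import.
root = Path(tempfile.mkdtemp())
sub = root / "pkg" / "sub"
sub.mkdir(parents=True)

(root / "pkg" / "__init__.py").write_text(textwrap.dedent("""
    class Adam:  # stand-in for an optimizer class
        pass

    import pkg.sub  # submodule import, as torch/optim/__init__.py does
"""))

(sub / "__init__.py").write_text(textwrap.dedent("""
    # Relative import: resolved against the (partially initialized)
    # parent package already in sys.modules, so it does not re-enter
    # a top-level import of the parent.
    from .. import Adam as _Adam

    Adam = _Adam
    del _Adam  # keep the public namespace clean
"""))

sys.path.insert(0, str(root))
import pkg

print(pkg.sub.Adam is pkg.Adam)  # True: both names refer to one class
```

Because `Adam` is defined in `pkg/__init__.py` before the `import pkg.sub` line runs, the relative import in the submodule finds it on the partially initialized parent instead of triggering a fresh import cycle.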


pytorch-bot bot commented Jun 20, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/129132

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit c86c6f5 with merge base 7178b4e:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D58815471

…ytorch#129132)

Summary:
Pull Request resolved: pytorch#129132

This avoids some internal test failures caused by "infinite" recursion during import. Example: P1430048846

A bit of history:

- PR pytorch#127703 introduced a circular dependency:
  `torch/optim/__init__.py` imports `torch.optim._multi_tensor` and
  `torch/optim/_multi_tensor/__init__.py` imports `torch.optim`.

  This seemed to work fine (green signals everywhere) but caused some internal test failures after it landed: infinite recursion during import.

- PR pytorch#128875 attempted to fix this by removing the import from `torch/optim/__init__.py`.

  This also seemed to work fine: green signals everywhere, and the previously failing tests started passing. However, a smaller number of tests then started failing because they were unable to import `torch.optim._multi_tensor`.

- This diff re-introduces the import but avoids the infinite recursion by using relative package paths.

Test Plan:
CI signals

Previously failing tests are passing:

* https://www.internalfb.com/intern/testinfra/testrun/4785074840480058
* https://www.internalfb.com/intern/testinfra/testrun/281475349103659
* https://www.internalfb.com/intern/testinfra/testrun/11540474083575936

Differential Revision: D58815471
Comment on lines +44 to +54
```python
del _Adam
del _AdamW
del _NAdam
del _SGD
del _RAdam
del _RMSprop
del _Rprop
del _ASGD
del _Adamax
del _Adadelta
del _Adagrad
```
Contributor

Why is this needed?

Contributor Author

It's not strictly required. Just didn't want to corrupt the namespace.
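To illustrate the namespace point (a hedged sketch: the module here is synthetic, with `_Adam`/`Adam` mirroring the names in the diff), re-exporting under a public name and then deleting the private alias keeps the alias from lingering as a module attribute:

```python
import types

# Build a synthetic module and run the re-export-then-del pattern in it.
mod = types.ModuleType("demo")
exec(
    "class _Adam:\n"
    "    pass\n"
    "Adam = _Adam\n"   # public re-export
    "del _Adam\n",     # drop the private alias from the namespace
    mod.__dict__,
)

print(hasattr(mod, "Adam"))   # True: public name survives
print(hasattr(mod, "_Adam"))  # False: alias no longer a module attribute
```

Without the `del`, both names would show up in `dir()` of the module, even though only one is meant to be public.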

Comment on lines +40 to +42


```python
import torch.optim._multi_tensor
```
Contributor

This was removed in #128875

What is the reason for adding it back?

Contributor Author

@fbgheith Jun 20, 2024

Removing it caused some test failures: depending on the order in which imports ran, some tests could no longer resolve `torch.optim._multi_tensor`.
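By analogy (a hedged, stdlib-only sketch: `xml` / `xml.dom` stand in for `torch.optim` / `torch.optim._multi_tensor`), a submodule is only reachable as an attribute of its parent package once something has actually imported it, which is why tests that never imported the submodule themselves broke when the eager import was removed:

```python
import sys

# Start the demo from a clean slate so no earlier import interferes.
for name in [m for m in sys.modules if m == "xml" or m.startswith("xml.")]:
    del sys.modules[name]

import xml
before = hasattr(xml, "dom")   # False: nothing has imported xml.dom yet

import xml.dom                 # the eager submodule import, by analogy
after = hasattr(xml, "dom")    # True: attribute now bound on the parent

print(before, after)
```

Re-introducing `import torch.optim._multi_tensor` in `torch/optim/__init__.py` plays the role of the second import here: it guarantees the attribute is bound regardless of what the test imports first.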

@albanD albanD removed their request for review June 20, 2024 20:20
Comment on lines +9 to +21
```python
from .. import (
    Adadelta as _Adadelta,
    Adagrad as _Adagrad,
    Adam as _Adam,
    Adamax as _Adamax,
    AdamW as _AdamW,
    ASGD as _ASGD,
    NAdam as _NAdam,
    RAdam as _RAdam,
    RMSprop as _RMSprop,
    Rprop as _Rprop,
    SGD as _SGD,
)
```
Collaborator

Suggested change

```diff
-from .. import (
-    Adadelta as _Adadelta,
-    Adagrad as _Adagrad,
-    Adam as _Adam,
-    Adamax as _Adamax,
-    AdamW as _AdamW,
-    ASGD as _ASGD,
-    NAdam as _NAdam,
-    RAdam as _RAdam,
-    RMSprop as _RMSprop,
-    Rprop as _Rprop,
-    SGD as _SGD,
-)
+import torch
```

Use torch.optim.XXX below.

Contributor Author

Circular imports with absolute paths are causing failures in our environment (infinite recursion). Using relative paths fixes that.
