
Add XLA support to moco benchmark. #2292

Closed
wants to merge 2 commits

Conversation

ysiraichi (Contributor)

This PR tweaks the moco benchmark so that it also runs on XLA devices. Previously, moco hardcoded the CUDA device in two ways:

  • Initializing the ProcessGroup with the nccl backend only
  • Explicitly moving intermediate tensors to cuda

In order to add XLA support, this PR:

  • Also checks for xla* devices and, if detected, initializes the ProcessGroup with the xla backend
  • Moves intermediate tensors to the appropriate device
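The backend-selection logic described above can be sketched as a small helper. This is a hypothetical illustration (the function name `select_backend` and the fallback to gloo are assumptions); the actual benchmark code may be structured differently:

```python
def select_backend(device: str) -> str:
    """Map a benchmark device string to a ProcessGroup backend.

    Hypothetical helper illustrating the change described above;
    the real benchmark wires this logic into its setup code.
    """
    if device.startswith("xla"):   # e.g. "xla" or "xla:0"
        return "xla"               # backend registered by torch_xla
    if device.startswith("cuda"):
        return "nccl"
    return "gloo"                  # assumed CPU fallback

# The benchmark would then initialize the group once, e.g.:
#   torch.distributed.init_process_group(backend=select_backend(device), ...)
```

Using `startswith` rather than equality lets the check also match device strings that carry an ordinal, such as "xla:0".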

cc @lezcano

@ysiraichi (Contributor, Author)

The CI failures don't seem related to this PR.
@xuzhao9 I think this PR is ready for review. Could you take a look whenever you have some time?


@albanD left a comment


Sounds good to me.

FYI @aaronenyeshi and @JackCaoG

```python
        )
    except RuntimeError:
        pass  # already initialized?
elif device.startswith("xla"):
```

nit: for consistency, `device == "xla"`?
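For context on this nit, the two checks differ when the device string carries an ordinal. A minimal illustration (the `"xla:0"` form is an assumption about how device strings may appear):

```python
device = "xla:0"  # device strings may carry an ordinal suffix

# startswith matches both the bare name and the ordinal form
assert device.startswith("xla")
assert "xla".startswith("xla")

# strict equality only matches the bare name
assert device != "xla"
```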

@facebook-github-bot (Contributor)

@xuzhao9 has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.

@facebook-github-bot (Contributor)

@xuzhao9 merged this pull request in 612b3c8.


5 participants