Add support for AMD ROCm #1414

Open
2 of 3 tasks
FindHao opened this issue Feb 15, 2023 · 5 comments

Comments


FindHao commented Feb 15, 2023

PyTorch can be installed easily with AMD ROCm, but torchbench still has some limitations in ROCm environments; a GPU memory-measurement sketch follows the task list below.

  • model installation
  • model execution
  • GPU memory measurement
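On the GPU memory-measurement item: ROCm builds of PyTorch expose HIP through the torch.cuda namespace, so the usual peak-memory counters are expected to work there as well. A minimal sketch (the Linear workload and sizes are placeholders, not a torchbench model):

```python
import torch

# On ROCm builds, torch.cuda maps to HIP; torch.version.hip is set
# (it is None on CUDA builds), which is a handy environment check.
assert torch.cuda.is_available()
print("HIP version:", torch.version.hip)

device = torch.device("cuda")
model = torch.nn.Linear(4096, 4096).to(device)   # placeholder workload
x = torch.randn(64, 4096, device=device)

torch.cuda.reset_peak_memory_stats(device)
y = model(x)
torch.cuda.synchronize(device)

peak = torch.cuda.max_memory_allocated(device)
print(f"peak allocated: {peak / 1024**2:.1f} MiB")
```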
FindHao self-assigned this Feb 15, 2023

FindHao commented Feb 17, 2023

The detectron2_* and opacus_cifar10 models have installation errors. If we only want to measure execution time, torchbench works well on the other models. Verified on an MI210 with ROCm 5.2.0.
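On the execution-time-only point, a timing sketch that should behave the same on ROCm as on CUDA, since CUDA events are backed by HIP events there (the workload is again a placeholder rather than one of the torchbench models):

```python
import torch

device = torch.device("cuda")                     # HIP device on ROCm builds
model = torch.nn.Linear(4096, 4096).to(device)    # placeholder workload
x = torch.randn(64, 4096, device=device)

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)

for _ in range(5):                                # warm-up iterations
    model(x)
torch.cuda.synchronize()

start.record()
for _ in range(100):
    model(x)
end.record()
torch.cuda.synchronize()                          # wait so elapsed_time is valid

print(f"avg latency: {start.elapsed_time(end) / 100:.3f} ms")
```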


FindHao commented Jul 6, 2023

> The detectron2_* and opacus_cifar10 models have installation errors. If we only want to measure execution time, torchbench works well on the other models. Verified on an MI210 with ROCm 5.2.0.

To install those models, make sure ROCm has its LLVM installed.
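A small pre-flight check along those lines; the /opt/rocm default prefix, the ROCM_PATH override, and the rocm-llvm package name are assumptions about a typical ROCm 5.x install, not something spelled out in this thread:

```python
import os

# ROCm usually ships its own LLVM/Clang under <rocm prefix>/llvm; the
# detectron2_* and opacus_cifar10 builds need it. Adjust paths to your setup.
rocm_path = os.environ.get("ROCM_PATH", "/opt/rocm")
clang = os.path.join(rocm_path, "llvm", "bin", "clang")

if os.path.exists(clang):
    print("ROCm LLVM found:", clang)
else:
    raise RuntimeError(
        f"ROCm LLVM not found at {clang}; install ROCm's LLVM component "
        "(e.g. the rocm-llvm package) before building these models."
    )
```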


FindHao commented Jul 7, 2023

#1752 adds a userbenchmark for testing on ROCm.

@jinsong-mao

How about running test_bench.py on ROCm with AMD GPUs?


FindHao commented Nov 16, 2023

> How about running test_bench.py on ROCm with AMD GPUs?

Are you asking about the current support status of ROCm?
