
Add warpSize to Device properties #128449

Closed
ramcherukuri wants to merge 5 commits

Conversation

@ramcherukuri (Contributor) commented Jun 11, 2024

Adding warp_size to CudaDeviceProperties.

```
>>> import torch
>>> prop = torch.cuda.get_device_properties(torch.cuda.current_device())
>>> prop.warp_size
64
```
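
For illustration only, here is a minimal sketch of how downstream code could consume the new property instead of hardcoding 32 (NVIDIA) or 64 (ROCm). The helper name is hypothetical and not part of this PR:

```python
import torch

def threads_to_warps(num_threads: int, device: int = 0) -> int:
    # Hypothetical helper: round a thread count up to whole warps/wavefronts
    # using the device-reported warp size rather than a hardcoded constant.
    warp_size = torch.cuda.get_device_properties(device).warp_size
    return (num_threads + warp_size - 1) // warp_size

if torch.cuda.is_available():
    # 100 threads -> 4 warps on NVIDIA (warp size 32), 2 wavefronts on ROCm (64)
    print(threads_to_warps(100))
```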

@jeffdaily @pruthvistony @jithunnair-amd @ROCmSupport

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang

@ramcherukuri requested a review from eqy as a code owner June 11, 2024 21:00

pytorch-bot bot commented Jun 11, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/128449

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (4 Unrelated Failures)

As of commit 1e68689 with merge base c12a4f2:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pruthvistony added the ciflow/trunk, rocm, ciflow/rocm, ciflow/periodic, and ciflow/inductor labels Jun 13, 2024
@jataylo self-requested a review June 14, 2024 14:39
@jeffdaily
Collaborator

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased warp_size-dev-prop onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout warp_size-dev-prop && git pull --rebase)

@jeffdaily requested review from huydhn and malfet June 19, 2024 16:28
@ramcherukuri (Contributor, Author) commented Jun 25, 2024

@malfet @huydhn, can you please help review/merge this PR? Thank you.

@jithunnair-amd
Collaborator

@malfet Can you please approve/merge this PR? It's blocking another PR, #129663.

@malfet (Contributor) commented Jun 27, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team: raised by workflow job

@jataylo added the release notes: rocm label Jun 28, 2024
@jeffdaily
Collaborator

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@malfet added the topic: improvements label Jun 28, 2024
@pytorchmergebot
Collaborator

The merge job was canceled or timed out. This most often happens if two merge requests were issued for the same PR, or if the merge job waited for more than 6 hours for tests to finish. In the latter case, please do not hesitate to reissue the merge command.
For more information see pytorch-bot wiki.

@jataylo (Collaborator) commented Jul 1, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit to khushi-411/pytorch that referenced this pull request Jul 2, 2024
Adding warp_size to CudaDeviceProperties.

```
>>> import torch
>>> prop = torch.cuda.get_device_properties(torch.cuda.current_device())
>>> prop.warp_size
64
```

@jeffdaily @pruthvistony @jithunnair-amd @ROCmSupport

Co-authored-by: Jithun Nair <[email protected]>
Pull Request resolved: pytorch#128449
Approved by: https://github.com/eqy, https://github.com/jataylo, https://github.com/jithunnair-amd, https://github.com/malfet
pytorchmergebot pushed a commit that referenced this pull request Jul 19, 2024
…9663)

As of ROCm 6.1, [hipDeviceProp_t::regsPerMultiprocessor](https://rocm.docs.amd.com/projects/HIP/en/latest/doxygen/html/structhip_device_prop__t.html#a7390d5b180d63978c81aa971060270b4) is now available, allowing us to enable this attribute on ROCm.
```
>>> torch.cuda.get_device_properties(0)
_CudaDeviceProperties(name='AMD Instinct MI250X/MI250', major=9, minor=0, gcnArchName='gfx90a:sramecc+:xnack-', total_memory=65520MB, multi_processor_count=104)
>>> torch.cuda.get_device_properties(0).regs_per_multiprocessor
65536
```

With https://github.com/triton-lang/triton/pull/3962 we can extract n_regs and n_spills from a Triton binary with the AMD backend, allowing us to enable Inductor's dynamic_rblock_scaling on ROCm, initially implemented in #115094.

Leaving this in draft until the following PRs have landed:
- #129361 to bump the triton commit pin
- #128449 to allow us to grab warp_size from device properties instead of hard coding 64 on ROCm.

Pull Request resolved: #129663
Approved by: https://github.com/jansel, https://github.com/shunting314
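
For context, a rough sketch of the kind of occupancy check dynamic_rblock_scaling relies on, using the regs_per_multiprocessor and warp_size properties discussed above. This is not Inductor's actual implementation; the halving policy and the half-register-file budget are illustrative assumptions:

```python
import torch

def scale_rblock(rblock: int, n_regs: int, num_warps: int, device: int = 0) -> int:
    # Illustrative sketch: shrink RBLOCK while the kernel's estimated register
    # demand (registers per thread * threads per block) exceeds half of the
    # per-SM/CU register file reported by the device.
    props = torch.cuda.get_device_properties(device)
    while rblock > 1:
        threads_per_block = num_warps * props.warp_size
        if n_regs * threads_per_block <= props.regs_per_multiprocessor // 2:
            break
        rblock //= 2
        num_warps = max(1, num_warps // 2)
    return rblock
```
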
DiweiSun pushed a commit to DiweiSun/pytorch that referenced this pull request Jul 22, 2024
xuhancn pushed a commit to xuhancn/pytorch that referenced this pull request Jul 25, 2024
Labels
ciflow/inductor, ciflow/periodic, ciflow/rocm, ciflow/trunk, Merged, module: inductor, open source, release notes: rocm, rocm, topic: improvements