[reland][ROCm] TunableOp for gemm_and_bias #128919
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/128919
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 1 Unrelated Failure
As of commit 7a44c2f with merge base 1491a61:
NEW FAILURE - The following job has failed:
FLAKY - The following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@xw285cornell Can you please review, and approve if this PR looks good?
Hi @xw285cornell - appreciate your help getting this PR reviewed.
@malfet I think @xw285cornell is OOO. Can you please approve and merge this PR?
@malfet re-ping
@jeffdaily / @naromero77amd: I see a unit test was added for this PR:
torch.cuda.tunable.enable(True)
ordinal = torch.cuda.current_device()
filename = f"tunableop_results{ordinal}.csv"
torch.cuda.tunable.set_filename(filename)
iterations = torch.cuda.tunable.get_max_tuning_iterations()
torch.cuda.tunable.set_max_tuning_iterations(10)
self._test_addmm_impl(torch._addmm_activation, "relu", device, dtype)
# clean up, remove any file that was generated
try:
    import os
    os.remove(filename)
except FileNotFoundError:
    pass
# reset back to prior settings
torch.cuda.tunable.set_max_tuning_iterations(iterations)
torch.cuda.tunable.enable(False)
Nit: please use try/finally to avoid altering global state if the test fails or gets interrupted.
Suggested change:

iterations = torch.cuda.tunable.get_max_tuning_iterations()
try:
    torch.cuda.tunable.enable(True)
    ordinal = torch.cuda.current_device()
    filename = f"tunableop_results{ordinal}.csv"
    torch.cuda.tunable.set_filename(filename)
    torch.cuda.tunable.set_max_tuning_iterations(10)
    self._test_addmm_impl(torch._addmm_activation, "relu", device, dtype)
finally:
    # clean up, remove any file that was generated
    try:
        import os
        os.remove(filename)
    except FileNotFoundError:
        pass
    # reset back to prior settings
    torch.cuda.tunable.set_max_tuning_iterations(iterations)
    torch.cuda.tunable.enable(False)
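One caveat with the suggested version: iterations is deliberately read before the try block so the finally clause can restore the prior value even if an early call raises, but filename is only bound inside the try, so a failure in enable(True) would leave it undefined when the cleanup runs. Hoisting the filename computation above the try as well would avoid that.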
return c10::str(transa, transb, "_", m, "_", n, "_", k);
}

size_t GetSize(bool duplicate_inputs) const {
Nit: I thought we were using camelCase for methods and CapitalizedCamelCase for class names.
@pytorchbot merge -i
Merge failed. Reason: 1 job has failed: trunk / win-vs2019-cpu-py3 / test (default, 3, 3, windows.4xlarge.nonephemeral). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA: 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed: trunk / macos-py3-arm64 / test (default, 3, 3, macos-m1-stable). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "unrelated macos cpu job failed, all other CI is known flaky or passing"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Reland of #128143, but with alpha and bias initialization added to launchTunableGemmAndBias.

Thus far, TunableOp was implemented for gemm, bgemm, and scaled_mm; gemm_and_bias was notably missing. This PR closes that gap.
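For context, here is a minimal sketch of how the newly tuned path might be exercised from Python. The torch.cuda.tunable calls mirror the unit test quoted above; using torch.nn.functional.linear as the trigger is an assumption, since whether a biased matmul actually dispatches to gemm_and_bias depends on the backend (e.g. hipBLASLt on ROCm) and dtype.

# Sketch only: assumes a CUDA/ROCm device is available and that a biased
# linear reaches the gemm_and_bias path on this backend.
import torch
import torch.nn.functional as F

prev_iterations = torch.cuda.tunable.get_max_tuning_iterations()
try:
    torch.cuda.tunable.enable(True)
    torch.cuda.tunable.set_max_tuning_iterations(10)
    x = torch.randn(64, 128, device="cuda", dtype=torch.half)
    w = torch.randn(256, 128, device="cuda", dtype=torch.half)
    b = torch.randn(256, device="cuda", dtype=torch.half)
    # F.linear computes x @ w.T + b; with TunableOp enabled, the underlying
    # GEMM is tuned and the winning solution is recorded for reuse.
    y = F.linear(x, w, b)
finally:
    # restore prior global tuning state, following the try/finally pattern
    # suggested in the review above
    torch.cuda.tunable.set_max_tuning_iterations(prev_iterations)
    torch.cuda.tunable.enable(False)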
cc @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang