[aotinductor] Only autotune at compile time when enabled via config #129335
Conversation
This pull request was exported from Phabricator. Differential Revision: D58931057
torch/_inductor/codegen/wrapper.py (outdated)

```diff
@@ -1488,7 +1488,7 @@ def wrap_arg(arg):
     elif isinstance(arg, (int, float, bool, SymbolicCallArg)):
         return str(arg)
     else:
-        return pexpr(V.graph.sizevars.simplify(arg))
+        return self.expr_printer(V.graph.sizevars.simplify(arg))
```
Will this fix the internal model failure you pointed out in D58931057?
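To illustrate why this one-line change matters: the base wrapper code generator prints sympy expressions as Python, while the C++ wrapper subclass overrides the printer, so call sites must dispatch through the instance attribute rather than the module-level `pexpr` helper. Below is a minimal, self-contained sketch of that dispatch pattern; the class names and printer bodies are simplified stand-ins for Inductor's actual `WrapperCodeGen`/`pexpr`/`cexpr` machinery, not the real implementation.

```python
# Hedged sketch: why self.expr_printer beats a hardcoded pexpr(...) call.
import sympy


def pexpr(expr):  # stand-in for Inductor's Python-syntax expression printer
    return str(expr)


def cexpr(expr):  # stand-in for Inductor's C++-syntax expression printer
    return f"static_cast<int64_t>({expr})"


class WrapperCodeGen:
    expr_printer = staticmethod(pexpr)

    def wrap_arg(self, arg):
        # Going through self.expr_printer picks up the subclass override;
        # calling pexpr(...) directly would always emit Python syntax,
        # which is wrong inside C++ wrapper codegen.
        return self.expr_printer(sympy.sympify(arg))


class CppWrapperCodeGen(WrapperCodeGen):
    expr_printer = staticmethod(cexpr)


print(WrapperCodeGen().wrap_arg("s0 + 1"))     # s0 + 1
print(CppWrapperCodeGen().wrap_arg("s0 + 1"))  # static_cast<int64_t>(s0 + 1)
```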
```diff
@@ -1300,7 +1300,7 @@ def compile_fx(
     with config.patch(
         {
             "cpp_wrapper": False,
-            "triton.autotune_at_compile_time": True,
+            "triton.autotune_at_compile_time": config.triton.autotune_at_compile_time,
```
Can you also update https://github.com/pytorch/pytorch/blob/main/torch/_inductor/config.py#L700 to make the default value 1? I wanted to keep this always on unless it breaks something beyond the current test coverage. I am not sure whether your change to wrapper.py fixes the underlying problem. If it does, you can make the default value of TORCHINDUCTOR_TRITON_AUTOTUNE_AT_COMPILE_TIME always 1; otherwise, you can make the default of TORCHINDUCTOR_TRITON_AUTOTUNE_AT_COMPILE_TIME differ between fbcode and OSS.
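For context, a config default like this is typically read from an environment variable at import time, which is what lets the fbcode and OSS defaults diverge. The sketch below shows the pattern being discussed; the `is_fbcode()` heuristic and the exact default values are illustrative assumptions, not a quote of the upstream config.py.

```python
# Hedged sketch of an env-var-gated config default, in the spirit of
# torch/_inductor/config.py. Default values here are illustrative.
import os
import torch


def is_fbcode() -> bool:
    # Assumption: mirrors the heuristic used in Inductor's config, where
    # fbcode builds of torch lack a git_version attribute.
    return not hasattr(torch.version, "git_version")


# "1" enables Triton autotuning at compile time. Defaulting differently per
# environment keeps the feature always on inside fbcode while the OSS issue
# is still unresolved.
_default = "1" if is_fbcode() else "0"
autotune_at_compile_time = (
    os.environ.get("TORCHINDUCTOR_TRITON_AUTOTUNE_AT_COMPILE_TIME", _default) == "1"
)
```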
Making the fbcode and OSS defaults different, since the issue still exists.
Sorry, I was wrong. Changing _inductor/config.py will change the behavior of JIT Inductor. We can only change it here.
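With the change scoped to compile_fx, users opt in explicitly rather than getting the behavior unconditionally. A rough usage sketch follows; the AOTInductor entry point (`torch._export.aot_compile`) and the toy model/inputs are assumptions for illustration, as the exact API has shifted across PyTorch releases.

```python
# Sketch: opting in to compile-time Triton autotuning for AOTInductor.
import torch
import torch._inductor.config as inductor_config

# Opt in via config (equivalent to setting the env var
# TORCHINDUCTOR_TRITON_AUTOTUNE_AT_COMPILE_TIME=1 before compiling).
inductor_config.triton.autotune_at_compile_time = True

model = torch.nn.Linear(16, 16).cuda()
example_inputs = (torch.randn(8, 16, device="cuda"),)

# AOT-compile the model; with the flag on, Triton kernels are autotuned
# while the shared library is built rather than when it is first loaded.
so_path = torch._export.aot_compile(model, example_inputs)
```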
Force-pushed from 2b0a266 to 6b1830c.
Force-pushed from 6b1830c to 44e4d62.
Force-pushed from 44e4d62 to 9da53f9.
Commit message: [aotinductor] Only autotune at compile time when enabled via config (pytorch#129335)
Summary: Pull Request resolved: pytorch#129335
Test Plan: ci
Reviewed By: jingsh, desertfire
Differential Revision: D58931057
Force-pushed from 9da53f9 to 8426eea.
Commit message: [aotinductor] Only autotune at compile time when enabled via config (pytorch#129335)
Summary: Pull Request resolved: pytorch#129335
Test Plan: ci
Reviewed By: jingsh, desertfire, houseroad
Differential Revision: D58931057
Force-pushed from 8426eea to defde8d.
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 3 mandatory check(s) failed. Dig deeper by viewing the failures on hud.
Summary: ---
Test Plan: ci
Differential Revision: D58931057
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @desertfire @chauhang
@diff-train-skip-merge