[Dynamic Shapes] fixed dynamic shape inference #128807
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/128807
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures) As of commit 65099e1 with merge base 8a2fed7:
BROKEN TRUNK - The following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: 1 job has failed: inductor / linux-jammy-cpu-py3.8-gcc11-inductor / test (dynamic_cpu_inductor_torchbench, 1, 2, linux.12xlarge). Details for Dev Infra team: raised by workflow job.
Here's the information related to the failure inductor / linux-jammy-cpu-py3.8-gcc11-inductor / test (dynamic_cpu_inductor_torchbench, 1, 2, linux.12xlarge): the error comes from DALLE2_pytorch being unable to load on CPU, which doesn't look related to my code change.
DALLE2_pytorch doesn't look related to the code change; its error is NotImplementedError: DALL-E 2 Not Supported on CPU.
Just checked export/test_serdes.py::SerDesExportTestExport::test_derived_dim_1_2_serde. The test code in export/test_export.py gives some flexibility on the input shape, but my code change breaks that flexibility, so I need to think about how to keep it. Luckily, tomorrow is Juneteenth, so I have some free time to investigate.
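For context, here is a minimal sketch of the kind of derived-dim flexibility that torch.export provides on input shapes. This is not the actual test_derived_dim_1_2 code; the module, dim names, and ranges below are assumptions chosen only for illustration.

```python
import torch
from torch.export import Dim, export


class Slice(torch.nn.Module):
    # Hypothetical module: the output length is x.shape[0] - 1, so the two
    # inputs' first dimensions are related rather than independent.
    def forward(self, x, y):
        return x[1:] + y


dimx = Dim("dimx", min=3, max=6)  # primary dynamic dim
ep = export(
    Slice(),
    (torch.randn(5), torch.randn(4)),
    dynamic_shapes={"x": {0: dimx}, "y": {0: dimx - 1}},  # derived dim
)
# The exported program accepts any shapes within the declared range,
# not just the shapes it was traced with:
print(ep.module()(torch.randn(4), torch.randn(3)).shape)  # torch.Size([3])
```

A change that specializes the symbol to a single value would shrink this to exactly the traced shapes, which is the lost flexibility described above.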
After investigation, the change doesn't prevent exporting a graph module in test_derived_dim_1_2 in test_export. However, it only allows the use of a tensor with dim 2 and blocks tensors with any other dim value. That is not the intention: the change is meant to constrain graph compiling, not the usage of the exported program.
Before this code change, the graph's first placeholder node represents a fake tensor,
_update_var_to_range() is also called from constrain_symbol_range() when deserializing the graph in _export; that's why the replacement's effect propagates to the ExportedProgram. A potential solution is to avoid changing _update_var_to_range and instead move the change to ShapeEnv._refine_ranges().
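As a minimal sketch of the two call paths being contrasted (the class below is a simplified stand-in with assumed method placement, not the actual symbolic_shapes.py code): anything hooked into the shared _update_var_to_range helper also runs on the deserialization path, while _refine_ranges is only reached through guard-driven refinement at compile time.

```python
class MiniShapeEnv:
    """Simplified stand-in for ShapeEnv, only to show the two call paths."""

    def __init__(self):
        self.var_to_range = {}  # symbol name -> (lower, upper)

    def _update_var_to_range(self, sym, lower, upper):
        # Shared helper: anything hooked in here fires on BOTH paths below,
        # which is why the replacement leaked into the ExportedProgram.
        self.var_to_range[sym] = (lower, upper)

    def constrain_symbol_range(self, sym, lower, upper):
        # Deserialization path (reached while _export rebuilds the graph).
        self._update_var_to_range(sym, lower, upper)

    def _refine_ranges(self, sym, lower, upper):
        # Guard-driven path at compile time: moving the replacement here keeps
        # it out of deserialization (see the sketch after the PR summary below).
        self._update_var_to_range(sym, lower, upper)


env = MiniShapeEnv()
env.constrain_symbol_range("s0", 2, 2)  # serde path: should not specialize
env._refine_ranges("s1", 3, 3)          # compile path: eligible to specialize
```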
Moved the replacement logic to ShapeEnv._refine_ranges. Tested these test cases, including the reported failures:
TORCH_LOGS="+dynamic,+dynamo" python test/dynamo/test_misc.py -k test_raise_guard_indirect_full_constraint
TORCH_LOGS="+dynamic,+dynamo" python test/dynamo/test_misc.py -k test_guard_failure_fn_shape_control
TORCH_LOGS="+dynamic,+dynamo" python test/dynamo/test_subclasses.py -k test_wrapper_subclass_guards_on_inner_tensor
TORCH_LOGS="+dynamic,+dynamo" python test/export/test_serdes.py -k test_derived_dim_1_2_serdes
@ezyang Could you review the change again? I changed the code in symbolic_shapes.py to fix test failures. |
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 Hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Constrain a dynamic dimension that is indirectly bound to an integer.
After each ShapeEnv._refine_ranges call, check whether the new ValueRange is a singleton; if it is, replace the symbol with that value.
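A minimal, self-contained sketch of that check (assumed names and a simplified ValueRange; not the actual symbolic_shapes.py implementation): once refinement collapses a range to a single value, the symbol is recorded as a replacement so later uses see the concrete integer.

```python
from dataclasses import dataclass

import sympy


@dataclass
class ValueRange:
    lower: int
    upper: int

    def is_singleton(self) -> bool:
        return self.lower == self.upper


class MiniShapeEnv:
    """Simplified stand-in for ShapeEnv, only to illustrate the singleton check."""

    def __init__(self):
        self.var_to_range = {}   # sympy symbol -> ValueRange
        self.replacements = {}   # sympy symbol -> sympy.Integer

    def _refine_ranges(self, sym, lower, upper):
        old = self.var_to_range.get(sym)
        if old is not None:
            lower = max(lower, old.lower)
            upper = min(upper, old.upper)
        new = ValueRange(lower, upper)
        self.var_to_range[sym] = new
        # The fix: if refinement has pinned the range to one value (e.g. a
        # guard like s0 * 2 == 6 indirectly binds s0 to 3), replace the
        # symbol with that constant.
        if new.is_singleton():
            self.replacements[sym] = sympy.Integer(new.lower)


s0 = sympy.Symbol("s0", positive=True, integer=True)
env = MiniShapeEnv()
env._refine_ranges(s0, 1, 10)  # range still open: no replacement
env._refine_ranges(s0, 3, 3)   # range collapses to {3}: symbol is replaced
print(env.replacements)        # {s0: 3}
```

Under this scheme, a dynamic dimension that a guard indirectly binds to an integer ends up constrained to that value, which is the behavior described in the summary above.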
Fixes #122307
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @ezyang