Revert "enable fx graph cache on torchbench (#128239)"
Summary:
This reverts commit 734e8f6ad7e7f0fa0341fb658f1f986225173f5f.

Reverted pytorch/pytorch#128239 on behalf of https://github.com/huydhn due to Sorry for reverting your change but it seems to surface a bunch of inductor failures in trunk https://hud.pytorch.org/pytorch/pytorch/commit/734e8f6ad7e7f0fa0341fb658f1f986225173f5f ([comment](pytorch/pytorch#128239 (comment)))

Reviewed By: clee2000

Differential Revision: D58444069

fbshipit-source-id: 031136895b2a19ce20ae3c5c74ec43d925acb218
pytorchmergebot authored and facebook-github-bot committed Jun 12, 2024
1 parent cb0f1fb commit 2a573fe
Showing 1 changed file with 0 additions and 4 deletions.
userbenchmark/dynamo/dynamobench/torchbench.py (0 additions, 4 deletions):

@@ -25,10 +25,6 @@
 # We are primarily interested in tf32 datatype
 torch.backends.cuda.matmul.allow_tf32 = True
 
-# Enable FX graph caching
-if "TORCHINDUCTOR_FX_GRAPH_CACHE" not in os.environ:
-    torch._inductor.config.fx_graph_cache = True
-
 
 def _reassign_parameters(model):
     # torch_geometric models register parameter as tensors due to
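For context, the reverted change had torchbench.py turn on Inductor's FX graph cache whenever the TORCHINDUCTOR_FX_GRAPH_CACHE environment variable was not set. After this revert the benchmark harness falls back to the normal defaults; the following is a minimal sketch of how a standalone run could still opt in, using only the environment variable and config flag visible in the removed lines. The explicit torch._inductor.config import is an assumption for a self-contained script (the original file relied on its own imports).

import os

import torch
import torch._inductor.config  # explicit import so the config module is loaded in a standalone script

# Respect an explicit TORCHINDUCTOR_FX_GRAPH_CACHE setting from the environment;
# otherwise opt in via the config flag, mirroring the snippet removed by this revert.
if "TORCHINDUCTOR_FX_GRAPH_CACHE" not in os.environ:
    torch._inductor.config.fx_graph_cache = True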
