enable fx graph cache on torchbench (#128239)
Summary:
We've already enabled the FX graph cache for the timm and huggingface suites, but we had failures saving cache entries for moco. It looks like pytorch/pytorch#128052 has fixed that issue, so we can now enable it for torchbench as well.

X-link: pytorch/pytorch#128239
Approved by: https://github.com/oulgen

Reviewed By: clee2000

Differential Revision: D58501160

Pulled By: masnesral

fbshipit-source-id: b7ceeecbf632671059a07e17851d4301c79945ec
masnesral authored and facebook-github-bot committed Jun 13, 2024
1 parent f29ff89 commit 04a1339
Showing 1 changed file with 4 additions and 0 deletions.
userbenchmark/dynamo/dynamobench/torchbench.py
@@ -25,6 +25,10 @@
 # We are primarily interested in tf32 datatype
 torch.backends.cuda.matmul.allow_tf32 = True
 
+# Enable FX graph caching
+if "TORCHINDUCTOR_FX_GRAPH_CACHE" not in os.environ:
+    torch._inductor.config.fx_graph_cache = True
+
 
 def _reassign_parameters(model):
     # torch_geometric models register parameter as tensors due to
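For illustration only (not part of the commit): a minimal, self-contained sketch of what this guard does. Because torch._inductor reads TORCHINDUCTOR_FX_GRAPH_CACHE from the environment itself, the in-process default below only applies when the variable is unset, so an explicit user setting such as TORCHINDUCTOR_FX_GRAPH_CACHE=0 is never overridden. The toy compiled function is an assumption, added just to exercise the cache.

import os

import torch
import torch._inductor.config

# Default the FX graph cache on, but only when the user has not already
# expressed a preference through the environment; an explicit value in
# TORCHINDUCTOR_FX_GRAPH_CACHE takes precedence over this default.
if "TORCHINDUCTOR_FX_GRAPH_CACHE" not in os.environ:
    torch._inductor.config.fx_graph_cache = True

# Hypothetical workload: on a second run of this script, the compiled
# artifact for f can be served from the FX graph cache rather than being
# recompiled from scratch.
@torch.compile
def f(x):
    return x.sin() + x.cos()

print(f(torch.randn(8)))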
