
[AudioLM] Graph break: const method call float.is_integer #121334

Open
ezyang opened this issue Mar 6, 2024 · 0 comments
Assignees
Labels
dynamo-triage-june2024 internal ramp-up task Tasks that are suitable for new folks w/ high-touch guidance from senior PyTorch folks module: dynamo module: graph breaks oncall: pt2 triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

ezyang commented Mar 6, 2024

🐛 Describe the bug

Discovered while compiling https://github.com/lucidrains/audiolm-pytorch/

[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] Graph break: const method call float.is_integer from user code at:
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/data/users/ezyang/audiolm-pytorch/audiolm_pytorch/soundstream.py", line 830, in forward
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     x = self.encoder_attn(x)
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     return forward_call(*args, **kwargs)
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/data/users/ezyang/audiolm-pytorch/audiolm_pytorch/soundstream.py", line 436, in forward
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     x = attn(x, attn_bias = attn_bias) + x
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     return forward_call(*args, **kwargs)
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/local_attention-1.9.0-py3.10.egg/local_attention/transformer.py", line 106, in forward
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     out = self.attn_fn(q, k, v, mask = mask, attn_bias = attn_bias)
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     return forward_call(*args, **kwargs)
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/local_attention-1.9.0-py3.10.egg/local_attention/local_attention.py", line 126, in forward
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     (needed_pad, q), (_, k), (_, v) = map(lambda t: pad_to_multiple(t, self.window_size, dim = -2), (q, k, v))
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/local_attention-1.9.0-py3.10.egg/local_attention/local_attention.py", line 126, in <lambda>
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     (needed_pad, q), (_, k), (_, v) = map(lambda t: pad_to_multiple(t, self.window_size, dim = -2), (q, k, v))
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]   File "/home/ezyang/local/miniconda3-test/envs/audiolm/lib/python3.10/site-packages/local_attention-1.9.0-py3.10.egg/local_attention/local_attention.py", line 37, in pad_to_multiple
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG]     if m.is_integer():
[2024-03-06 11:43:47,766] [0/0] torch._dynamo.symbolic_convert.__graph_breaks: [DEBUG] 
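The pattern that trips Dynamo here can be distilled to a few lines of plain Python. This is a hypothetical sketch (the function name `needs_pad` is mine, not from `local_attention`): dividing a sequence length by the window size produces a Python `float`, and calling `.is_integer()` on that float is the "const method call" Dynamo cannot trace, so it falls back with a graph break.

```python
# Hypothetical distillation of the check in local_attention's pad_to_multiple.
# True division on ints yields a Python float; float.is_integer() on it is the
# const method call that Dynamo reports as a graph break under torch.compile.
def needs_pad(seq_len, window_size):
    m = seq_len / window_size       # Python float, e.g. 5 / 4 -> 1.25
    return not m.is_integer()       # the untraced const method call

print(needs_pad(8, 4))   # False: 8 is already a multiple of 4
print(needs_pad(5, 4))   # True: would need padding up to 8
```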

Should be pretty simple to fix.
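One likely fix on the Dynamo side is simply to constant-fold `float.is_integer()` when the receiver is a constant; a user-level workaround, sketched below under the assumption that the operands are positive ints (the helper name `is_divisible` is mine), is to use integer modulo arithmetic, which traces without a break:

```python
# Hypothetical workaround sketch: replace (a / b).is_integer() with integer
# modulo arithmetic, which Dynamo traces without a graph break.
def is_divisible(length, multiple):
    return (length % multiple) == 0

# Equivalent to float.is_integer() for positive integer inputs:
assert is_divisible(8, 4) == (8 / 4).is_integer()
assert is_divisible(7, 4) == (7 / 4).is_integer()
```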

Full repro code: https://gist.github.com/ezyang/64c24c9fc5529f3afed4ee4266f6adc5

Versions

main

cc @msaroufim @bdhirsh @anijain2305 @zou3519 @chauhang @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng

@williamwen42 williamwen42 added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module module: dynamo module: graph breaks internal ramp-up task Tasks that are suitable for new folks w/ high-touch guidance from senior PyTorch folks labels Mar 7, 2024