
[export] Export warnings as no-ops #113792

Closed
angelayi opened this issue Nov 15, 2023 · 7 comments
Assignees: angelayi
Labels: module: dynamo · oncall: pt2 · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@angelayi
Contributor

angelayi commented Nov 15, 2023

🐛 Describe the bug

import torch
import warnings

def f(x):
    warnings.warn("moo")
    return x + x

ep = torch.export.export(f, (torch.ones(1, 3),))

Results in:

Traceback (most recent call last):
  ...
  File "/data/users/angelayi/pytorch/torch/_dynamo/symbolic_convert.py", line 1209, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/data/users/angelayi/pytorch/torch/_dynamo/symbolic_convert.py", line 650, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "/data/users/angelayi/pytorch/torch/_dynamo/variables/user_defined.py", line 394, in call_function
    return self.call_method(tx, "__call__", args, kwargs)
  File "/data/users/angelayi/pytorch/torch/_dynamo/variables/user_defined.py", line 306, in call_method
    return super().call_method(tx, name, args, kwargs)
  File "/data/users/angelayi/pytorch/torch/_dynamo/variables/base.py", line 317, in call_method
    raise unimplemented(f"call_method {self} {name} {args} {kwargs}")
  File "/data/users/angelayi/pytorch/torch/_dynamo/exc.py", line 193, in unimplemented
    raise Unsupported(msg)
torch._dynamo.exc.Unsupported: call_method UserDefinedObjectVariable(warn) __call__ [ConstantVariable(str)] {}

The proposal is to trace this warning call as a no-op.
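A minimal sketch of the intended behavior once the proposal lands (illustrative only; the exact printed graph depends on the PyTorch version):

import torch
import warnings

def f(x):
    warnings.warn("moo")  # expected to be traced as a no-op rather than graph breaking
    return x + x

# With the proposal in place, export should succeed instead of raising Unsupported.
ep = torch.export.export(f, (torch.ones(1, 3),))
print(ep.graph)  # expected: only the add node, with no node for the warning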

cc @ezyang @msaroufim @wconstab @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @yanboliang

Versions

master

@angelayi added the module: dynamo, oncall: pt2, and triaged labels on Nov 15, 2023
@angelayi self-assigned this on Nov 15, 2023
@yanboliang
Contributor

We don't have ATen-level ops to represent logging, so I think it's fine to wrap these calls as no-ops to avoid graph breaks. But I think we should print out a warning message to make users aware of this. cc @jansel @ezyang

@ezyang
Contributor

ezyang commented Nov 15, 2023

dupe of #93739

@yanboliang
Contributor

@ezyang I understand the pros and cons of wrapping them as no-ops vs. reordering them; do you think we can have a config (e.g., prints_warning_as_no_op=False) for users to mitigate these graph breaks? It could be False by default.

@ezyang
Contributor

ezyang commented Nov 16, 2023

I don't think it is too much work to fix this properly; let's decide what we want to do and just do it in one go.

@angelayi
Contributor Author

@ezyang so is it ok to decide that we will wrap these print/warning/log library functions as no-ops? For export we can just ignore these functions, and for torch.compile we reorder them to after the graph?

pytorchmergebot pushed a commit that referenced this issue Mar 1, 2024
Currently, when there is a print/warning in the graph, Dynamo graph breaks, causing export to fail. However, export would like to just skip over these print/warning calls: #113792.

Additionally, there's a torch.compile feature request to "reorder prints" so that instead of graph breaking when hitting prints/logging, we can skip over these prints to create larger compiled graphs, and then print the results out after those compiled graphs run: #93739. This PR also adds the `reorderable_logging_functions` config for users to register logging functions to be reordered (like `print` or a custom logging function). A printout of the bytecode after reordering the prints looks like the following: P914736600

There are some limitations to the printing right now:
* You can only register logging functions, not methods
* Inputs to the logging functions can only be tensors, constants, and format strings
* Inputs to the logging functions which will later be mutated in-place will not be printed correctly

TODO: Add the following tests:
* print function with a nested data structure argument;
* print function with a nested data structure argument that is updated inside the compiled region (this would test whether we handle side effects correctly);
* custom logging functions with nn.Module or nn.Module attribute arguments;
* custom logging functions with submodule inputs/outputs as arguments (we need to handle the mapping and fused-out values);
* custom logging functions with a tensor argument that is mutated inside the function (TBD: this may increase memory usage).

Pull Request resolved: #116106
Approved by: https://github.com/yanboliang
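For the torch.compile reordering path, a hedged usage sketch of the `reorderable_logging_functions` config described above. This assumes the config is exposed as a set on torch._dynamo.config that logging callables can be added to, as the PR description suggests; the exact API may differ by version.

import torch
import torch._dynamo.config as dynamo_config

# Register print as a reorderable logging function (assumed to be a set-like config).
dynamo_config.reorderable_logging_functions.add(print)

@torch.compile
def f(x):
    # Expected to be deferred until after the compiled graph runs,
    # instead of causing a graph break.
    print("sum is", x.sum())
    return x + x

f(torch.ones(3))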
@anijain2305
Contributor

Any update on this?

@angelayi
Contributor Author

This is done!
