
Support fusion options for benchmark.py #10900

Merged: 3 commits merged into master on Mar 18, 2022
Conversation

zhangyaobit (Contributor)

Description: Describe your changes.

Motivation and Context

  • Why is this change required? What problem does it solve?
  • If it fixes an open issue, please link to the issue here.

@zhangyaobit zhangyaobit marked this pull request as ready for review March 16, 2022 23:26

    if 'pt' in model_source:
        with torch.no_grad():
            onnx_model_file, is_valid_onnx_model, vocab_size, max_sequence_length = export_onnx_model_from_pt(
                model_name, MODELS[model_name][1], MODELS[model_name][2], MODELS[model_name][3], model_class,
                config_modifier, cache_dir, onnx_dir, input_names, use_gpu, precision, optimizer_info,
-               validate_onnx, use_raw_attention_mask, overwrite, model_fusion_statistics)
+               validate_onnx, use_raw_attention_mask, overwrite, model_fusion_statistics, fusion_options)
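The fusion_options value threaded into the export call above could be populated from command-line flags roughly as follows. This is a minimal sketch, not the actual benchmark.py implementation: the flag --disable_embed_layer_norm comes from the review discussion, while the FusionOptions dataclass, its fields, and parse_fusion_options are illustrative names.

```python
import argparse
from dataclasses import dataclass


@dataclass
class FusionOptions:
    # Hypothetical options object; each field toggles one graph-fusion
    # pass applied by the optimizer script.
    enable_embed_layer_norm: bool = True
    enable_attention: bool = True


def parse_fusion_options(argv):
    """Translate --disable_* flags into a FusionOptions instance."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--disable_embed_layer_norm", action="store_true")
    parser.add_argument("--disable_attention", action="store_true")
    args = parser.parse_args(argv)
    return FusionOptions(
        enable_embed_layer_norm=not args.disable_embed_layer_norm,
        enable_attention=not args.disable_attention,
    )
```

Under this sketch, passing --disable_embed_layer_norm on the command line turns off only that one fusion while leaving the others enabled.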
A reviewer (Contributor) commented:

    This might log a warning, since fusion_options only takes effect when optimizer_info is set to by_script.
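The reviewer's point can be sketched as a small guard (hypothetical function and logger names; the actual wiring inside benchmark.py may differ):

```python
import logging

logger = logging.getLogger(__name__)


def check_fusion_options(optimizer_info, fusion_options):
    """Warn when fusion options are supplied but would be ignored.

    Fusion options only take effect when optimization is driven by the
    Python optimizer script, i.e. optimizer_info == "by_script".
    Returns True when the options will be honored.
    """
    if fusion_options is not None and optimizer_info != "by_script":
        logger.warning(
            "fusion_options will be ignored because optimizer_info is %s, not by_script",
            optimizer_info,
        )
        return False
    return True
```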

zhangyaobit (author) replied:

    Added a warning.

ytaous (Contributor) commented Mar 17, 2022:

    Run ONNXRuntime and TorchScript on CPU for all models with quantization:

Can we append one example of the fusion options here? e.g. --disable_embed_layer_norm


Refers to: onnxruntime/python/tools/transformers/benchmark.py:35 in 0c79f92. [](commit_id = 0c79f92, deletion_comment = False)

zhangyaobit (author) replied:

Yes, added an example.

wangyems (Contributor) left a review comment:

    lgtm

@zhangyaobit zhangyaobit merged commit 5d4ff67 into master Mar 18, 2022
@zhangyaobit zhangyaobit deleted the zhanyao/fusionoption branch March 18, 2022 03:57
lavanyax pushed a commit to intel/onnxruntime that referenced this pull request Mar 29, 2022
* Support fusion options for benchmark.py

* Add fusion options for tf model export as well.

* Add command example and warning related to fusion options.
seddonm1 pushed a commit to seddonm1/onnxruntime that referenced this pull request May 15, 2022
3 participants