
ONNXRuntime adding custom op #232

Open
OrkhanHI opened this issue Jan 8, 2021 · 3 comments
OrkhanHI commented Jan 8, 2021

Question

I have created a C++ script, built the binary .so file, loaded it as a shared library, and added my new function to ONNX.
I have loaded and checked the model, and everything looks fine.

Now I would like to run the model with ONNX Runtime, but I am getting:

Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from generator.onnx failed:Fatal error: my_grid_sampler is not a registered function/op

The provided tutorial does not clearly state the directory and file names that need to be created.

Could you please spell out the steps to register my custom op in ONNX Runtime, with a real example including file names?

@mldlwizard

I am working on a custom activation function and have implemented it in ONNX as a function, i.e. an operator schema built from a combination of primitive ONNX operators.
I also wrote and ran a backend test file for it, which passed.
I wanted to measure the operator's execution time, so I used helper.make_node and helper.make_graph to create a model with only that operator as a node. The idea was to execute the operator on ORT and record the inference times.
However, when the model is visualized in Netron, the model parameters are as follows:
format: ONNX v7
imports: ai.onnx v14
which means the opset version is 14.
The opset version comes out as 14 for any model created using helper.make_node and helper.make_graph with the default settings.
This results in an ORT error that opset 14 is experimental and not supported.
The error is as follows:

"Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from testng.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model_load_utils.h:47 void onnxruntime::model_load_utils::ValidateOpsetForDomain(const std::unordered_map<std::basic_string, int>&, const onnxruntime::logging::Logger&, bool, const string&, int) ONNX Runtime only guarantees support for models stamped with official released onnx opset versions. Opset 14 is under development and support for this is limited. The operator schemas and or other functionality may change before next ONNX release and in this case ONNX Runtime will not guarantee backward compatibility. Current official support for domain ai.onnx is till opset 13."

Please let me know how to change the opset version for a model containing only an individual operator created using helper.make_node and helper.make_graph.
Also, as @OrkhanHI has asked, do I need to implement this operator again in ORT to be able to execute it? If yes, what is the flow?

@hariharans29
Contributor

https://www.onnxruntime.ai/docs/how-to/add-custom-op.html has docs on how to add custom ops in ONNX Runtime and links to sample/test code.

@PKarhade

@mldlwizard were you able to find a solution to your issue? I am facing a similar problem and would like to hear about any workarounds.
Thanks in advance.
