
ONNX-Tensorflow API

onnx_tf.backend.prepare

Prepare an ONNX model for the Tensorflow backend. This function converts an ONNX model to an internal representation of the computational graph called TensorflowRep and returns the converted representation.

params:

model : The ONNX model to be converted.

device : The device to execute this model on. It can be either CPU (default) or CUDA.

strict : Whether to enforce semantic equivalence between the original model and the converted tensorflow model, defaults to True (yes, enforce semantic equivalence). Changing to False is strongly discouraged. Currently, the strict flag only affects the behavior of MaxPool and AveragePool ops.

logging_level : The logging level, default is INFO. Change it to DEBUG to see more conversion details or to WARNING to see fewer.

auto_cast : Whether to automatically cast data types that might lose precision for tensors whose types are not natively supported by Tensorflow, default is False.

returns:

A TensorflowRep class object representing the ONNX model.
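A minimal usage sketch, assuming an ONNX model file on disk (the path "model.onnx" is a placeholder) and using only the parameters documented above:

```python
import onnx
from onnx_tf.backend import prepare

# Load the ONNX model from disk ("model.onnx" is a placeholder path).
onnx_model = onnx.load("model.onnx")

# Convert it to the internal TensorflowRep representation on CPU,
# keeping the default strict semantic-equivalence behavior.
tf_rep = prepare(onnx_model, device="CPU", strict=True, logging_level="INFO")
```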

onnx_tf.backend_rep.TensorflowRep.export_graph

Export the backend representation to a Tensorflow proto file. This function obtains the graph proto corresponding to the ONNX model associated with the backend representation and serializes it to a protobuf file.

params:

path : The path to the output TF protobuf file.

returns:

None.
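A minimal sketch of exporting the converted graph, continuing from the TensorflowRep object returned by prepare above; the input and output paths are placeholders:

```python
import onnx
from onnx_tf.backend import prepare

# Convert the ONNX model, then serialize the resulting graph to a
# Tensorflow protobuf file ("output/model.pb" is a placeholder path).
tf_rep = prepare(onnx.load("model.onnx"))
tf_rep.export_graph("output/model.pb")
```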