ONNX Runtime is a performance-focused, complete scoring engine for Open Neural Network Exchange (ONNX) models, with an open, extensible architecture designed to keep pace with new developments in AI and deep learning. ONNX Runtime stays up to date with the ONNX standard, implementing all ONNX operators and supporting all ONNX releases (1.2+) with both backwards and forward compatibility. Please refer to this page for ONNX opset compatibility details.
ONNX is an interoperable format for machine learning models supported by various ML and DNN frameworks and tools. The universal format makes it easier to interoperate between frameworks and maximize the reach of hardware optimization investments.
- Setup
- Getting Started
- More Info
ONNX Runtime provides comprehensive support of the ONNX spec and can be used to run all models based on ONNX v1.2.1 and higher. See version compatibility details here.
Note: Some operators not supported in the current ONNX version may be available as Contrib Operators.
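For a quick check of which operator sets a model targets, the `onnx` Python package can read the model's opset imports. A minimal sketch; `model.onnx` is a placeholder path:

```python
import onnx

model = onnx.load("model.onnx")  # placeholder path to your model
for opset in model.opset_import:
    # An empty domain string means the default ONNX operator set.
    print(opset.domain or "ai.onnx", opset.version)
```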
Traditional ML support
In addition to DNN models, ONNX Runtime fully supports the ONNX-ML profile of the ONNX spec for traditional ML scenarios.
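As an illustration of the ONNX-ML profile, the sketch below trains a small scikit-learn classifier, converts it with the skl2onnx converter, and scores it with ONNX Runtime. The converter package and the input name `input` are assumptions for this example, not part of ONNX Runtime itself:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

# Train a small traditional-ML model.
X = np.random.rand(100, 4).astype(np.float32)
y = (X.sum(axis=1) > 2).astype(np.int64)
clf = LogisticRegression().fit(X, y)

# Convert to ONNX; classifiers like this fall under the ONNX-ML profile.
# Older skl2onnx versions may expect a fixed batch size instead of None.
onnx_model = convert_sklearn(clf, initial_types=[("input", FloatTensorType([None, 4]))])

# Score with ONNX Runtime (InferenceSession accepts a path or serialized bytes).
sess = ort.InferenceSession(onnx_model.SerializeToString())
pred = sess.run(None, {"input": X[:3]})
print(pred[0])  # predicted labels for the first three rows
```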
ONNX Runtime supports both CPU and GPU. Using various graph optimizations and accelerators, ONNX Runtime can provide lower latency than other runtimes, enabling faster end-to-end customer experiences and lower machine utilization costs.
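For example, the graph optimization level can be set per session through the Python API. A sketch; the `GraphOptimizationLevel` enum is exposed in recent package releases and names may vary across versions:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Enable all available graph optimizations for this session.
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
sess = ort.InferenceSession("model.onnx", sess_options=so)  # placeholder model path
```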
Currently ONNX Runtime supports the following accelerators:
- CPU
  - MLAS (Microsoft Linear Algebra Subprograms)
  - MKL-DNN
  - MKL-ML
  - Intel nGraph
- GPU
  - CUDA
  - TensorRT
Not all variations are supported in the official release builds, but they can be built from source following these instructions.
We are continuously working to integrate new execution providers for further improvements in latency and efficiency. If you are interested in contributing a new execution provider, please see this page.
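From the Python API you can check which execution providers a given build supports and which ones a session actually selected. A sketch assuming a recent package version; `model.onnx` is a placeholder:

```python
import onnxruntime as ort

# Which execution providers were compiled into this build?
print(ort.get_available_providers())  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

sess = ort.InferenceSession("model.onnx")
# Which providers is this session actually using, in priority order?
print(sess.get_providers())
```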
API documentation and package installation
ONNX Runtime is available for Linux, Windows, and Mac, with Python, C#, and C APIs, and more to come! If you have specific scenarios that are not currently supported, please share your suggestions and scenario details via GitHub Issues.
Quick Start: The ONNX-Ecosystem Docker container image is available on Dockerhub and includes ONNX Runtime (CPU, Python), dependencies, tools to convert from various frameworks, and Jupyter notebooks to help get started.
Additional dockerfiles for some features can be found here.
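For reference, a minimal Python scoring loop looks roughly like this. A sketch; the model path and input shape are placeholders for your own model:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")        # placeholder path
input_name = sess.get_inputs()[0].name           # first model input
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # shape depends on the model
outputs = sess.run(None, {input_name: x})        # None = return all outputs
print(outputs[0].shape)
```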
| | CPU (MLAS+Eigen) | CPU (MKL-ML) | GPU (CUDA) |
|---|---|---|---|
| Python | pypi: onnxruntime<br>Windows (x64)<br>Linux (x64)<br>Mac OS X (x64) | -- | pypi: onnxruntime-gpu<br>Windows (x64)<br>Linux (x64) |
| C# | Nuget: Microsoft.ML.OnnxRuntime<br>Windows (x64, x86)<br>Linux (x64, x86)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.MKLML<br>Windows (x64)<br>Linux (x64)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.Gpu<br>Windows (x64)<br>Linux (x64) |
| C | Nuget: Microsoft.ML.OnnxRuntime<br>.zip, .tgz<br>Windows (x64, x86)<br>Linux (x64, x86)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.MKLML<br>Windows (x64)<br>Linux (x64)<br>Mac OS X (x64) | Nuget: Microsoft.ML.OnnxRuntime.Gpu<br>.zip, .tgz<br>Windows (x64)<br>Linux (x64) |
- ONNX Runtime binaries in the CPU packages use OpenMP and depend on the library being available at runtime on the system.
  - For Windows, OpenMP support comes as part of the VC runtime. It is also available as redist packages: vc_redist.x64.exe and vc_redist.x86.exe.
  - For Linux, the system must have `libgomp.so.1`, which can be installed using `apt-get install libgomp1`.
- GPU builds require the CUDA 10.0 and cuDNN 7.3 runtime libraries to be installed on the system. Older releases used CUDA 9.1/cuDNN 7.1 - please refer to the release notes for more details. The sketch after this list shows a quick way to check which build is installed.
- Python binaries are compatible with Python 3.5-3.7. See the Python Dev Notes.
- Certain operators make use of system locales. Installation of the English language package and configuration of the `en_US.UTF-8` locale is required; the sketch after this list includes a quick check.
  - For Ubuntu, install the language-pack-en package and run the following commands:
    - `locale-gen en_US.UTF-8`
    - `update-locale LANG=en_US.UTF-8`
  - Follow a similar procedure to configure other locales on other platforms.
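A minimal sketch for verifying these environment requirements from Python: `get_device()` reports whether the installed build was compiled for GPU, and `locale.setlocale` raises an error if the required locale is missing:

```python
import locale
import onnxruntime as ort

# 'GPU' if this build was compiled with CUDA support, otherwise 'CPU'.
print(ort.get_device())

# Raises locale.Error if en_US.UTF-8 is not installed on the system.
locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
```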
If additional build flavors are needed, please find instructions on building from source at Build ONNX Runtime. For production scenarios, it's strongly recommended to build from an official release branch.
Dockerfiles are available here to help you get started.
- The ONNX Model Zoo has popular ready-to-use pre-trained models.
- To export or convert a trained model from various frameworks to ONNX format, see ONNX Tutorials. Versioning compatibility information can be found under Versioning.
- Other services can also be used to create ONNX models.
ONNX Runtime can be deployed to the cloud for model inferencing using Azure Machine Learning Services. See detailed instructions and sample notebooks.
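As a hedged sketch of the first step, registering an ONNX model with the azureml-core SDK might look like the following; the workspace config and model name are placeholders, and the linked notebooks cover the full deployment flow (inference config, deployment targets, and so on):

```python
from azureml.core import Workspace
from azureml.core.model import Model

# Reads config.json downloaded from the Azure portal into the working directory.
ws = Workspace.from_config()

# Upload and register the ONNX model with the workspace.
model = Model.register(workspace=ws,
                       model_path="model.onnx",    # local file to upload (placeholder)
                       model_name="my-onnx-model") # placeholder name
print(model.name, model.version)
```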
ONNX Runtime Server (beta) is a hosted application for serving ONNX models using ONNX Runtime, providing a REST API for prediction. Usage details can be found here, and image installation instructions are here.
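A hedged sketch of calling the server's REST prediction endpoint from Python; the URL pattern and JSON payload shape follow the usage docs linked above and may change while the server is in beta:

```python
import base64
import struct
import requests

# Placeholder host, model name, and version; see the usage docs for the exact pattern.
url = "http://localhost:8001/v1/models/mymodel/versions/1:predict"

# Tensor contents are sent as base64-encoded raw bytes (here, four float32 values).
raw = base64.b64encode(struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)).decode()
payload = {"inputs": {"input": {"dims": ["1", "4"],
                                "dataType": 1,    # 1 = FLOAT in onnx TensorProto
                                "rawData": raw}}}

resp = requests.post(url, json=payload)
print(resp.status_code, resp.json())
```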
- Basic Inferencing Sample
- Inferencing (Resnet50)
- Inferencing samples using ONNX-Ecosystem Docker image
- Train, Convert, and Inference a SKL pipeline
- Convert and Inference a Keras model
- ONNX Runtime Server: SSD Single Shot MultiBox Detector
- Running ONNX model tests
Deployment with AzureML
- Inferencing: Facial Expression Recognition, MNIST Handwritten Digits, Resnet50 Image Classification, TinyYolo
- Train and Inference MNIST from PyTorch
- FER+ on Azure Kubernetes Service with TensorRT
Extensibility options
- Add a custom operator/kernel
- Add an execution provider
- Add a new graph transform
- Add a new rewrite rule
We welcome contributions! Please see the contribution guidelines.
For any feedback or to report a bug, please file a GitHub Issue.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.