ONNX Runtime v1.18.0

Released by @yihonglyu on 21 May 2024 (commit 4573740)

Announcements

  • Windows ARM32 support has been dropped at the source code level.
  • Python version >=3.8 is now required for build.bat/build.sh (previously >=3.7). Note: If you are on a Python version <3.8, you can bypass these scripts and invoke CMake directly.
  • The onnxruntime-mobile Android package and the onnxruntime-mobile-c/onnxruntime-mobile-objc iOS CocoaPods are being deprecated. Please use the onnxruntime-android Android package and the onnxruntime-c/onnxruntime-objc CocoaPods, which support both ONNX and ORT format models and all operators and data types. Note: If you require a smaller binary size, a custom build is required. See Custom build | onnxruntime for details on creating a custom Android or iOS package.

Build System & Packages

  • CoreML execution provider now depends on coremltools.
  • Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
  • ONNX has been upgraded from 1.15 → 1.16.
  • EMSDK has been upgraded from 3.1.51 → 3.1.57.
  • Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
  • There is a new onnxruntime_CUDA_MINIMAL CMake option for building ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
  • Added Mac Catalyst build support.
  • Added initial support for RISC-V and three new build options for it: --rv64, --riscv_toolchain_root, and --riscv_qemu_path.
  • The TensorRT EP can now be built with protobuf-lite instead of the full version of protobuf.
  • Some security-related compile/link flags have been moved from the default settings to a new build option: --use_binskim_compliant_compile_flags. Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, it is OFF by default.
  • The Windows ARM64 build now depends on the PyTorch CPUINFO library.
  • Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our NuGet packages will depend on kernel32.dll. Note: Windows systems without kernel32.dll need to have reverse forwarders (see API set loader operation - Win32 apps | Microsoft Learn for more information).

Core

  • Added ONNX 1.16 support.
  • Added additional optimizations related to Dynamo-exported models.
  • Improved testing infrastructure for EPs developed as shared libraries.
  • Exposed Reserve() in OrtAllocator so custom allocators work when session.use_device_allocator_for_initializers is specified (see the sketch after this list).
  • Reduced lock contention caused by memory allocations.
  • Improved session creation time (graph and graph transformer optimizations).
  • Added a new SessionOptions config entry to disable specific transformers and rules.
  • [C# API] Exposed SessionOptions.DisablePerSessionThreads to allow sharing of threadpool between sessions.
  • [Java API] Added CUDA 12 Java support.
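
To make the allocator item above concrete, here is a minimal Python sketch that sets the session.use_device_allocator_for_initializers config entry; the model path and provider list are placeholders rather than anything specific to this release.

```python
import onnxruntime as ort

# Route initializer allocations through the device allocator so that a
# registered custom allocator (one that implements Reserve()) serves them.
so = ort.SessionOptions()
so.add_session_config_entry("session.use_device_allocator_for_initializers", "1")

# "model.onnx" is a placeholder path; CUDA is one EP where this applies.
sess = ort.InferenceSession(
    "model.onnx",
    sess_options=so,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```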

Performance

  • Improved 4-bit quantization support (see the sketch after this list):
    • Added HQQ quantization support to improve accuracy.
    • Implemented general GEMM kernel and improved GEMV kernel performance on GPU.
    • Improved GEMM kernel quality and performance on x64.
    • Implemented general GEMM kernel and improved GEMV performance on ARM64.
  • Improved MultiheadAttention performance on CPU.
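
As a hedged sketch of how the improved 4-bit path is typically exercised from Python, the snippet below runs the quantization tooling's MatMul4BitsQuantizer over a float model; the model path, block size, and symmetry setting are illustrative choices.

```python
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

# Load a float ONNX model ("model.onnx" is a placeholder path).
model = onnx.load("model.onnx")

# Block-wise 4-bit quantization of MatMul weights; 128 is a common block size.
quantizer = MatMul4BitsQuantizer(model, block_size=128, is_symmetric=True)
quantizer.process()

# Save with external data in case the model exceeds protobuf's 2 GB limit.
quantizer.model.save_model_to_file("model_int4.onnx", use_external_data_format=True)
```

The HQQ accuracy improvements mentioned above are opted into through the quantizer's algorithm configuration rather than the default round-to-nearest path shown here.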

Execution Providers

  • TensorRT

    • Added support for TensorRT 10.
    • Finalized support for DDS ops.
    • Added Python support for user-provided CUDA streams.
    • Fixed various bugs.
  • CUDA

    • Added support for multiple CUDA graphs.
    • Added a provider option to disable TF32.
    • Added Python support for user-provided CUDA streams (see the sketch after this section).
    • Extended MoE to support Tensor Parallelism and int4 quantization.
    • Fixed bugs in the BatchNorm and TopK kernels.
  • QNN

    • Added support for up to QNN SDK 2.22.
    • Upgraded support from A16W8 → mixed 8/16-bit precision configurability per layer.
    • Added fp16 execution support via enable_htp_fp16 option.
    • Added multiple partition support for QNN context binary.
    • Expanded operator support and fixed various bugs.
    • Added support for per-channel quantized weights for Conv.
    • Added integration with Qualcomm's AI Hub.
  • OpenVINO

    • Added support for up to OpenVINO 2024.1.
    • Added support for importing pre-compiled blob as EPContext blob.
    • Separated device and precision as inputs: removed support for device_id in provider options and added precision as a separate option.
    • Deprecated the CPU_FP32 and GPU_FP32 terminology in favor of CPU and GPU.
    • AUTO:GPU,CPU now creates only a GPU blob, not a CPU blob.
  • DirectML

    • Additional ONNX operator support: Resize-18 and Resize-19, Col2Im-18, IsNaN-20, IsInf-20, and ReduceMax-20.
    • Additional contrib op support: SimplifiedLayerNormalization, SkipSimplifiedLayerNormalization, QLinearAveragePool, MatMulIntegerToFloat, GroupQueryAttention, DynamicQuantizeMatMul, and QAttention.
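
To make the CUDA stream and TF32 items above concrete, here is a hedged Python sketch; creating the stream with torch is only one option, and the model path and device id are placeholders.

```python
import torch
import onnxruntime as ort

# Obtain a raw CUDA stream handle; torch is one convenient way to create one.
stream = torch.cuda.Stream()

provider_options = {
    "device_id": 0,
    # New in this release: hand ORT an existing CUDA stream (passed as a string).
    "user_compute_stream": str(stream.cuda_stream),
    # New provider option to disable TF32 math on Ampere and newer GPUs.
    "use_tf32": "0",
}

sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=[("CUDAExecutionProvider", provider_options), "CPUExecutionProvider"],
)
```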

Mobile

  • Improved performance of ARM64 4-bit quantization.
  • Added support for building with QNN on Android.
  • Added Mac Catalyst support.
  • Added visionOS support.
  • Added initial support for creating ML Program format CoreML models.
  • Added support for 1D Conv and ConvTranspose to XNNPACK EP.

Web

  • Added WebNN EP preview.
  • Improved WebGPU performance (MultiHeadAttention, RotaryEmbedding).
  • Added more WebGPU and WebNN examples.
  • Increased generative model support.
  • Optimized buffer management to reduce the memory footprint.

Training

  • Large Model Training
    • Added optimizations for Dynamo-exported models (see the sketch after this section).
    • Added Mixtral integration using the ORT backend.
  • On-Device Training
    • Added support for models >2GB to enable SLM training on edge devices.
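
As one hedged illustration of training through the ORT backend, the sketch below wraps a toy PyTorch model in ORTModule; the layer sizes, optimizer, and random batch are placeholders, and an onnxruntime-training build with CUDA is assumed.

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

# A toy model standing in for a large transformer block.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).cuda()

# Wrapping with ORTModule routes forward and backward through ONNX Runtime.
model = ORTModule(model)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 1024, device="cuda")

optimizer.zero_grad()
loss = model(x).sum()  # placeholder loss
loss.backward()
optimizer.step()
```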

GenAI

  • Added additional model support: Phi-3, Gemma, Llama-3 (see the sketch after this list).
  • Added DML EP support.
  • Improved tokenizer quality.
  • Improved sampling method and ORT model performance.
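
A minimal sketch of the onnxruntime-genai flow for one of the newly supported models; the model directory is a placeholder, and the exact API surface may differ between onnxruntime-genai versions.

```python
import onnxruntime_genai as og

# "phi3-dir" is a placeholder for a directory holding an exported ORT model.
model = og.Model("phi3-dir")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode("What is ONNX Runtime?")

# Generation (greedy or sampling) runs inside ORT; decode the first sequence.
output_tokens = model.generate(params)
print(tokenizer.decode(output_tokens[0]))
```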

Extensions

  • Created Java packaging pipeline and published to Maven repository.
  • Added support for converting Hugging Face FastTokenizers into ONNX custom operators (see the sketch after this list).
  • Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
  • Fixed a Whisper large model pre-processing bug.
  • Enabled eager execution for custom operator and refactored the header file structure.
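
To illustrate the FastTokenizer conversion above, here is a hedged sketch built on onnxruntime-extensions' gen_processing_models; the checkpoint name is illustrative, and keyword details may vary across extensions versions.

```python
import onnx
from transformers import AutoTokenizer
from onnxruntime_extensions import gen_processing_models

# Any Hugging Face FastTokenizer checkpoint; bert-base-uncased is illustrative.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Generate an ONNX model that embeds the tokenizer as a custom operator.
# pre_kwargs={} requests the pre-processing (encoding) model.
pre_model, _ = gen_processing_models(tokenizer, pre_kwargs={})
onnx.save(pre_model, "bert_tokenizer.onnx")
```

Running the saved model then requires registering the extensions custom-op library with the session, e.g. via SessionOptions.register_custom_ops_library(get_library_path()).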

Contributors

Yi Zhang, Yulong Wang, Adrian Lizarraga, Changming Sun, Scott McKay, Tianlei Wu, Peng Wang, Hector Li, Edward Chen, Dmitri Smirnov, Patrice Vignola, Guenther Schmuelling, Ye Wang, Chi Lo, Wanming Lin, Xu Xing, Baiju Meswani, Peixuan Zuo, Vincent Wang, Markus Tavenrath, Lei Cao, Kunal Vaishnavi, Rachel Guo, Satya Kumar Jandhyala, Sheil Kumar, Yifan Li, Jiajia Qin, Maximilian Müller, Xavier Dupré, Yi-Hong Lyu, Yufeng Li, Alejandro Cid Delgado, Adam Louly, Prathik Rao, wejoncy, Zesong Wang, Adam Pocock, George Wu, Jian Chen, Justin Chu, Xiaoyu, guyang3532, Jingyan Wang, raoanag, Satya Jandhyala, Hariharan Seshadri, Jiajie Hu, Sumit Agarwal, Peter Mcaughan, Zhijiang Xu, Abhishek Jindal, Jake Mathern, Jeff Bloomfield, Jeff Daily, Linnea May, Phoebe Chen, Preetha Veeramalai, Shubham Bhokare, Wei-Sheng Chin, Yang Gu, Yueqing Zhang, Guangyun Han, inisis, ironman, Ivan Berg, Liqun Fu, Yu Luo, Rui Ren, Sahar Fatima, snadampal, wangshuai09, Zhenze Wang, Andrew Fantino, Andrew Grigorev, Ashwini Khade, Atanas Dimitrov, AtomicVar, Belem Zhang, Bowen Bao, Chen Fu, Dhruv Matani, Fangrui Song, Francesco, Frank Dong, Hans Chen, He Li, Heflin Stephen Raj, Jambay Kinley, Masayoshi Tsutsui, Matttttt, Nanashi, Phoebe Chen, Pranav Sharma, Segev Finer, Sophie Schoenmeyer, TP Boudreau, Ted Themistokleous, Thomas Boby, Xiang Zhang, Yongxin Wang, Zhang Lei, aamajumder, danyue, Duansheng Liu, enximi, fxmarty, kailums, maggie1059, mindest, mo-ja, moyo1997
Big thank you to everyone who contributed to this release!