Stars
Official inference library for Mistral models
Olive: Simplify ML Model Finetuning, Conversion, Quantization, and Optimization for CPUs, GPUs and NPUs.
Generative AI extensions for onnxruntime
AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference.
An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm.
ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python.
⚡Delightful WebNN resources, curated list of awesome things around WebNN ecosystem.😎
A tool to modify ONNX models visually, based on Netron and Flask.
TorchBench is a collection of open source benchmarks used to evaluate PyTorch performance.
Friendly machine learning for the web! 🤖
A collection of pre-trained, state-of-the-art models in the ONNX format
Convert TensorFlow, Keras, TensorFlow.js, and TFLite models to ONNX
DirectML is a high-performance, hardware-accelerated DirectX 12 library for machine learning. DirectML provides GPU acceleration for common machine learning tasks across a broad range of supported …
Simple package that makes your generator work in a background thread
Use AnimeGANv3 to create your own animations, including turning photos or videos into anime.
👾 Fast and simple video download library and CLI tool written in Go
Fast and Lightweight Observability Data Collector
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Prebuilt binary for TensorFlow Lite's standalone installer. For Raspberry Pi. A very lightweight installer that provides a binary with FlexDelegate, MediaPipe custom ops, and XNNPACK enabled.
Prebuilt binary with TensorFlow Lite enabled. For Raspberry Pi / Jetson Nano. Supports MediaPipe custom operations, XNNPACK (including multi-threaded), and FlexDelegate.
Automated CI toolchain to produce precompiled opencv-python, opencv-python-headless, opencv-contrib-python and opencv-contrib-python-headless packages.