Stars
Development repository for the Triton language and compiler
OpenAI Triton backend for Intel® GPUs
Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray
A CPU tool for benchmarking peak floating-point performance
Software Development Kit (SDK) for the Intel® Geti™ platform for Computer Vision AI model training.
Real-time video and audio streams over the network, with Streamlit.
How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO
A repository for storing models that have been inter-converted between various frameworks. Supported frameworks are TensorFlow, PyTorch, ONNX, OpenVINO, TFJS, TFTRT, TensorFlowLite (Float32/16/INT8…
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
OpenVINO backend for Triton.
📚 Jupyter notebook tutorials for OpenVINO™
The ROCm Validation Suite is a system administrator’s and cluster manager's tool for detecting and troubleshooting common problems affecting AMD GPU(s) running in a high-performance computing envir…
"Deep Learning from Scratch" (『ゼロから作る Deep Learning』, O'Reilly Japan, 2016)
OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
A Python package that extends official PyTorch for improved performance on Intel platforms
Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
intel / onnxruntime
Forked from microsoft/onnxruntime. ONNX Runtime: cross-platform, high performance scoring engine for ML models
"Deep Learning from Scratch ❷" (『ゼロから作る Deep Learning ❷』, O'Reilly Japan, 2018)
An implementation of YOLOv4, YOLOv4-relu, YOLOv4-tiny, YOLOv4-tiny-3l, and Scaled-YOLOv4, with INT8 quantization, in OpenVINO 2021.3
Source code for experience kits with Ansible-based deployment.