Adlik: Toolkit for Accelerating Deep Learning Inference
Model optimizer used in Adlik.
How to export PyTorch models with unsupported layers to ONNX and then to Intel OpenVINO
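A rough illustration of that export path (a hedged sketch, not this repo's code; the resnet18 placeholder model, file names, and opset version are assumptions, and genuinely unsupported layers would additionally need a custom symbolic registered via torch.onnx.register_custom_op_symbolic before export):

```python
import torch
import torchvision
import openvino as ov

# Placeholder model; a real use case would involve layers that need
# custom ONNX symbolics registered before export.
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# Step 1: export the PyTorch model to ONNX.
torch.onnx.export(model, dummy, "model.onnx", opset_version=13)

# Step 2: convert the ONNX file to OpenVINO IR and save it.
ov_model = ov.convert_model("model.onnx")
ov.save_model(ov_model, "model.xml")
```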
Latest YOLO model inference using the Intel OpenVINO toolkit
Demonstrates how to divide a DL model into multiple IR model files (division) and introduces a simple way to implement a custom layer that works with OpenVINO IR models.
Keras to TensorFlow test for the Neural Compute Stick 2
YOLOv5 inference using the Intel OpenVINO toolkit
Set up and run OpenVINO in a Docker Ubuntu environment on an Intel CPU with integrated graphics
This sample shows how to convert a TensorFlow model to an OpenVINO IR model and how to quantize the OpenVINO model.
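As a hedged sketch of that conversion-plus-quantization flow (the SavedModel path and the random calibration data below are placeholders, not the sample's actual setup):

```python
import numpy as np
import openvino as ov
import nncf

# Convert a TensorFlow SavedModel directory to an OpenVINO model.
ov_model = ov.convert_model("saved_model_dir")

# Post-training quantization needs a small calibration set;
# faked here with random NHWC tensors.
calib_items = [np.random.rand(1, 224, 224, 3).astype(np.float32)
               for _ in range(10)]
calib_dataset = nncf.Dataset(calib_items)

quantized = nncf.quantize(ov_model, calib_dataset)
ov.save_model(quantized, "model_int8.xml")
```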
This covers the basics of AI at the edge: leveraging pre-trained models available with the Intel® Distribution of OpenVINO™ Toolkit, converting and optimizing other models with the Model Optimizer, and performing inference with the Inference Engine.
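The inference step typically reduces to a few runtime calls, sketched below under an assumed file name and input shape (illustrative only):

```python
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")          # IR produced by the Model Optimizer
compiled = core.compile_model(model, "CPU")   # pick a device: CPU, GPU, ...

# One synchronous inference on a dummy input.
input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled(input_tensor)
print(list(result.values())[0].shape)
```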
Dockerfile to build an Intel® Distribution of OpenVINO™ Toolkit Docker image for the Raspberry Pi
Explore the OpenVINO toolkit, focusing on components such as the model zoo, the Inference Engine, and the Model Optimizer, and how they can be used for deep learning and computer vision tasks.
Master's thesis: a comparative study of an image classifier on the Raspberry Pi, comparing inference time on the Raspberry Pi with and without the Neural Compute Stick (NCS). It also studies how the complexity of a neural network affects inference time and analyzes whether the times obtained with the NCS …
ai-zipper offers numerous AI model compression methods and is easy to embed into your own source code
autooptimizer is a Python package for optimizing machine learning algorithms.
Serving Face Detection and Recognition Based on arc-face
Dockerfile for converting a frozen TensorFlow model to OpenVINO™ Intermediate Representation (IR) using the Model Optimizer (MO)
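In Python terms, the conversion step such a Dockerfile wraps looks roughly like this (a sketch using the legacy Model Optimizer API, which newer OpenVINO releases deprecate in favor of openvino.convert_model / the ovc CLI; the file name and input shape are placeholders):

```python
import openvino as ov
from openvino.tools.mo import convert_model

# Convert a frozen TensorFlow graph to an in-memory OpenVINO model.
ov_model = convert_model("frozen_graph.pb", input_shape=[1, 224, 224, 3])

# Serialize to IR (.xml + .bin).
ov.save_model(ov_model, "frozen_graph.xml")
```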
AI-based image classification inspired by the MobileNet V2 architecture, implementing changes to the base architecture, with details on using it as a proposed quick-response model for rapid application, and comparing it with other models for the same application.