MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
Build a high-performance deep learning inference library from scratch, supporting inference for models such as Llama 2, UNet, YOLOv5, and ResNet. Implement a high-performance deep learning inference library step by step
Blazing-fast Expression Templates Library (ETL) with GPU support, in C++
🌠 Bloom Simulation Software for Windows
High Performance Computing (HPC) and Signal Processing Framework
Active Convolution
Reverb effect using hybrid impulse convolution
An example of C++ extension for PyTorch.
This project implements a convolution kernel with Vivado HLS on the ZCU104 board
This project is about convolution operator optimization on GPUs, including GEMM-based (implicit GEMM) convolution.
A C++ library that produces images based on different luminosity profiles
Latte is a convolutional neural network (CNN) inference engine written in C++ and uses AVX to vectorize operations. The engine runs on Windows 10, Linux and macOS Sierra.
2D and 3D Matrix Convolution and Matrix Multiplication with CUDA
GUI image editor written in C++ and Qt. Provides functionality such as editing and kernel convolution.
Winograd convolution implementation in C++
C++ header only template library designed to make it easier to write high-performance SIMD (SSE, AVX, Neon) and multi-threaded code.