High-efficiency floating-point neural network inference operators for mobile, server, and Web
cpu, neural-network, inference, multithreading, simd, matrix-multiplication, neural-networks, convolutional-neural-networks, convolutional-neural-network, inference-optimization, mobile-inference
Updated Oct 18, 2024 - C