- OpenMV - A camera board that runs MicroPython on ARM Cortex-M4/M7 MCUs, with great support for computer vision algorithms. Now with support for TensorFlow Lite too.
- JeVois - A TensorFlow-enabled camera module.
- Edge TPU - Google’s purpose-built ASIC designed to run inference at the edge.
- Movidius - Intel's family of SoCs designed specifically for low power on-device computer vision and neural network applications.
- UP AI Edge - Line of products based on Intel Movidius VPUs (including Myriad 2 and Myriad X) and Intel Cyclone FPGAs.
- DepthAI - An embedded platform for combining Depth and AI, built around the Myriad X.
- NVIDIA Jetson - High-performance embedded system-on-module to unlock deep learning, computer vision, GPU computing, and graphics in network-constrained environments.
  - Jetson TX1
  - Jetson TX2
  - Jetson Nano
- Artificial Intelligence Radio - Transceiver (AIR-T) - High-performance SDR seamlessly integrated with state-of-the-art deep learning hardware.
- Kendryte K210 - Dual-core RISC-V chip with convolutional neural network acceleration via its built-in KPU.
- Kendryte K510 - Tri-core RISC-V processor with AI accelerators.
- GreenWaves GAP8 - RISC-V-based chip with hardware acceleration for convolutional operations.
- Ultra96 - Embedded development platform featuring a Xilinx UltraScale+ MPSoC FPGA.
- Apollo3 Blue - SparkFun Edge Development Board powered by a Cortex M4 from Ambiq Micro.
- Google Coral - Platform of hardware components and software tools for local AI products based on Google Edge TPU coprocessor.
  - Dev boards
  - USB Accelerators
  - PCIe / M.2 modules
- Gyrfalcon Technology Lightspeeur - Family of chips optimized for edge computing.
- ARM microNPU - Processors designed to accelerate ML inference (the first being the Ethos-U55).
- Espressif ESP32-S3 - SoC similar to the well-known ESP32 with support for AI acceleration (among many other interesting differences).
- Maxim MAX78000 - SoC based on a Cortex-M4 that includes a CNN accelerator.
- Beagleboard BeagleV - Open Source RISC-V-based Linux board that includes a Neural Network Engine.
- Syntiant TinyML - Development kit based on the Syntiant NDP101 Neural Decision Processor and a SAMD21 Cortex-M0+.
- TensorFlow Lite - Lightweight solution for mobile and embedded devices which enables on-device machine learning inference with low latency and a small binary size.
- TensorFlow Lite for Microcontrollers - Port of TF Lite for microcontrollers and other devices with only kilobytes of memory. Born from a merge with uTensor.
- Embedded Learning Library (ELL) - Microsoft's library to deploy intelligent machine-learned models onto resource-constrained platforms and small single-board computers.
- uTensor - AI inference library based on mbed (an RTOS for ARM chipsets) and TensorFlow.
- CMSIS NN - A collection of efficient neural network kernels developed to maximize the performance and minimize the memory footprint of neural networks on Cortex-M processor cores.
- ARM Compute Library - Set of optimized functions for image processing, computer vision, and machine learning.
- Qualcomm Neural Processing SDK for AI - Libraries that let developers run NN models on Snapdragon mobile platforms, taking advantage of the CPU, GPU, and/or DSP.
- ST X-CUBE-AI - Toolkit for generating NNs optimized for STM32 MCUs.
- ST NanoEdgeAIStudio - Tool that generates a model to be loaded into an STM32 MCU.
- Neural Network on Microcontroller (NNoM) - Higher-level, layer-based neural network library specifically for microcontrollers, with support for CMSIS-NN.
- nncase - Open deep learning compiler stack for Kendryte K210 AI accelerator.
- deepC - Deep learning compiler and inference framework targeted at embedded platforms.
- uTVM - MicroTVM, an open-source tool for optimizing tensor programs and running them on microcontrollers.
- Edge Impulse - Interactive platform to generate models that can run on microcontrollers. They are also quite active on social networks, talking about recent news on EdgeAI/TinyML.
- Qeexo AutoML - Interactive platform to generate AI models targeted at microcontrollers.
- mlpack - C++ header-only fast machine learning library that focuses on lightweight deployment. It offers a wide variety of machine learning algorithms and supports on-device learning on MPUs.
- AIfES - Platform-independent and standalone AI software framework optimized for embedded systems.
- onnx2c - ONNX to C compiler targeting "Tiny ML".
- Benchmarking Edge Computing (May 2019)
- Hardware benchmark for edge AI on cubesats - Open Source Cubesat Workshop 2018
- Why Machine Learning on The Edge?
- Tutorial: Low Power Deep Learning on the OpenMV Cam
- TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers - O'Reilly book by Pete Warden and Daniel Situnayake.
- tinyML Summit - Annual conference and monthly meetup held in California, USA. Talks and slides are usually available from the website.
- TinyML Papers and Projects - Compilation of the most recent papers and projects in the TinyML/EdgeAI field.
- MinUn - Accurate ML Inference on Microcontrollers.
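Most of the toolchains listed above (TF Lite, X-CUBE-AI, CMSIS-NN, and similar) rely on affine int8 quantization to shrink models for microcontrollers. As a rough, hedged sketch of the underlying arithmetic (function names and the range-selection heuristic here are illustrative, not any specific library's API):

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine quantization: real value -> int8, q = round(x / scale) + zero_point.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # Recover an approximation of the real value from the int8 representation.
    return (q.astype(np.int32) - zero_point) * scale

def choose_qparams(x_min, x_max):
    # Map the tensor's real range onto [-128, 127]; the range must include 0
    # so that a real zero is exactly representable.
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)
    scale = (x_max - x_min) / 255.0
    zero_point = int(round(-128 - x_min / scale))
    return scale, zero_point

# Toy weight tensor (illustrative values).
weights = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zp = choose_qparams(weights.min(), weights.max())
q = quantize(weights, scale, zp)
restored = dequantize(q, scale, zp)  # each entry within one quantization step
```

Real toolchains add per-axis scales, calibration, and operator fusion on top, but the round-trip error stays bounded by the scale, which is why int8 inference loses so little accuracy.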
Throughout my career as a Senior Firmware Engineer, I have immersed myself in Edge AI, working on innovative projects that leverage artificial intelligence at the edge of networks. My journey began with a strong foundation in ARM-based development, where I honed my skills on platforms like Qualcomm, NXP, and Rockchip.
I have successfully developed AI-driven IoT devices using NVIDIA Jetson, CUDA, and TensorRT, enabling advanced functionalities in real-time applications. My expertise in various connectivity solutions, including BLE, LoRa, LTE, and NB-IoT, has allowed me to create seamless communication between devices, enhancing their overall performance.
Working with TinyML, Raspberry Pi, and frameworks like Zephyr and FreeRTOS, I have implemented robust testing methodologies that ensure reliability and efficiency in my projects. This hands-on experience has not only deepened my technical skills but also fostered a passion for pushing the boundaries of what’s possible in Edge AI.
Advice for New Edge AI Enthusiasts
- Start with the Basics: Familiarize yourself with fundamental concepts in AI and IoT. Understanding how these technologies interact is crucial.
- Hands-On Practice: Engage in projects that allow you to apply your knowledge. Platforms like Raspberry Pi and NVIDIA Jetson are great for experimentation.
- Learn Programming Languages: Proficiency in languages like C/C++, Python, and Rust will be invaluable. Focus on mastering one language before expanding your skill set.
- Explore Connectivity Solutions: Understanding various connectivity protocols (BLE, LoRa, etc.) is essential for creating effective Edge AI solutions.
- Join Communities: Connect with other enthusiasts and professionals through forums, online courses, and local meetups. Sharing knowledge and experiences can accelerate your learning.
- Stay Updated: The field of Edge AI is rapidly evolving. Follow industry news, research papers, and attend webinars to keep your skills current.
- Embrace Challenges: Don’t be afraid to tackle complex problems. Each challenge you face is an opportunity to learn and grow.
By following these steps and maintaining a curious mindset, new enthusiasts can thrive in the exciting field of Edge AI.
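Hands-on practice can start very small. A toy example of the kind of kernel a TinyML runtime ultimately executes on a microcontroller is a single fully connected layer with ReLU; the sketch below uses plain Python with made-up values, purely for illustration:

```python
def dense_relu(inputs, weights, biases):
    # One fully connected layer followed by ReLU.
    # weights is a list of per-neuron weight vectors, biases one value per neuron.
    out = []
    for neuron_w, b in zip(weights, biases):
        acc = b + sum(x * w for x, w in zip(inputs, neuron_w))
        out.append(max(acc, 0))  # ReLU clamps negative activations to zero
    return out

# Toy 3-input, 2-neuron layer (illustrative values).
x = [1, 2, 3]
W = [[1, 0, -1], [2, 1, 0]]
b = [0, -3]
print(dense_relu(x, W, b))  # → [0, 1]
```

Reimplementing something this small in C on a dev board, then comparing it against a framework like TF Lite Micro, is a gentle way to connect the theory to what actually runs on the device.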