Use the widget below to experiment with YOLOv10. You can detect COCO classes such as people, vehicles, animals, and household items.
YOLOv10 is a real-time, state-of-the-art object detection model that claims to have “46% less latency on multiple models” (YOLOv10, 2024). Additionally, YOLOv10 introduces NMS-free training, which reduces latency and improves efficiency. YOLOv10 is available in six sizes: N, S, M, B, L, and X.
Learn how to train YOLOv10.
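As a quick illustration of how the model sizes map to code, here is a minimal inference sketch. It assumes the `ultralytics` Python package (which added YOLOv10 support) and the `yolov10{n,s,m,b,l,x}.pt` weight naming used by the official release; the image path is a placeholder.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv10 checkpoint; swap the suffix (n, s, m, b, l, x)
# to trade accuracy for latency, as shown in the table below.
model = YOLO("yolov10n.pt")

# Run inference on an image. Because YOLOv10 is trained NMS-free,
# no separate non-maximum suppression pass is applied here.
results = model("path/to/image.jpg", conf=0.25)

# Print detected COCO classes with confidence scores and bounding boxes.
for result in results:
    for box in result.boxes:
        class_name = result.names[int(box.cls)]
        print(class_name, box.conf.item(), box.xyxy.tolist())
```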
| Model | Test Size (pixels) | #Params (M) | FLOPs (G) | APval (%) | Latency (ms) |
|---|---|---|---|---|---|
| YOLOv10-N | 640 | 2.3 | 6.7 | 38.5 | 1.84 |
| YOLOv10-S | 640 | 7.2 | 21.6 | 46.3 | 2.49 |
| YOLOv10-M | 640 | 15.4 | 59.1 | 51.1 | 4.74 |
| YOLOv10-B | 640 | 19.1 | 92.0 | 52.5 | 5.74 |
| YOLOv10-L | 640 | 24.4 | 120.3 | 53.2 | 7.28 |
| YOLOv10-X | 640 | 29.5 | 160.4 | 54.4 | 10.70 |
(Table sourced from the official YOLOv10 repository)
In our testing, YOLOv10 struggled with objects far from the camera, even with a lowered confidence threshold. In the following image, YOLOv10 (left) is compared to YOLOv8 (right) on security footage. YOLOv8 clearly performs better because of YOLOv10's limited ability to detect distant objects.
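If you want to reproduce this kind of side-by-side check, a sketch like the one below works, assuming the `ultralytics` package and pretrained checkpoints for both models; the frame path is a placeholder, not the footage from this post.

```python
from ultralytics import YOLO

# Run the same frame through both models and compare detection counts
# at the same (low) confidence threshold to surface distant objects.
yolov10 = YOLO("yolov10s.pt")
yolov8 = YOLO("yolov8s.pt")

frame = "security_footage.jpg"  # placeholder path

for name, model in [("YOLOv10", yolov10), ("YOLOv8", yolov8)]:
    result = model(frame, conf=0.1)[0]
    print(f"{name}: {len(result.boxes)} detections")
```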
YOLOv10 is licensed under the GNU Affero General Public License.
You can use Roboflow Inference to deploy a YOLOv10 API on your hardware. You can deploy the model on CPU devices (e.g., Raspberry Pi, AI PCs) and GPU devices (e.g., NVIDIA Jetson, NVIDIA T4).
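For orientation, here is a minimal sketch of running a model through Roboflow's `inference` Python package. The model ID shown is a guess at the alias format, and an API key may be required; check the Inference documentation for the exact YOLOv10 model IDs available.

```python
# pip install inference
from inference import get_model

# Hypothetical model alias; consult the Roboflow Inference docs for the
# exact YOLOv10 IDs supported on your hardware.
model = get_model(model_id="yolov10n-640")

# Run the model against a local image; the same code runs on CPU
# (Raspberry Pi, AI PCs) and GPU devices (NVIDIA Jetson, NVIDIA T4).
results = model.infer("path/to/image.jpg")
print(results)
```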
Below are instructions on how to deploy your own model API.