
# YOLO v3 Evaluator

So you might have trained a model with the YOLO v3 Trainer and run the YOLO v3 Detector successfully, but how do you know whether the resulting model is objectively any good?

This repository is inspired by the error analysis and evaluation of YOLO v1, as well as Derek Hoiem's tool for Diagnosing Error in Object Detectors. It uses Python to compute the following metrics and error categories (a rough sketch of the categories follows the list):

- Mean Average Precision (mAP)
- Error rate caused by displacement of the bounding box (comparing the center of the detected box against the ground truth)
- Error rate caused by incorrect sizing of the bounding box (comparing the width and height of the detected box against the ground truth)
- Error rate caused by misclassification
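
As a rough illustration of how these categories can be assigned, here is a minimal Python sketch. The box format, tolerance thresholds, and decision rules are assumptions for illustration only; they are not taken from the actual evaluation script.

```python
# Minimal sketch of the error categories described above.
# Box format, thresholds, and category rules are assumptions, not the
# actual logic of the evaluation script.

from dataclasses import dataclass


@dataclass
class Box:
    cx: float    # center x
    cy: float    # center y
    w: float     # width
    h: float     # height
    label: str   # predicted or ground-truth class


def categorize(detection: Box, truth: Box,
               center_tol: float = 0.1, size_tol: float = 0.25) -> str:
    """Assign a detection to one error category (hypothetical rules)."""
    if detection.label != truth.label:
        return "classification error"

    # Displacement: distance between centers, relative to the ground-truth size.
    dx = abs(detection.cx - truth.cx) / truth.w
    dy = abs(detection.cy - truth.cy) / truth.h
    if dx > center_tol or dy > center_tol:
        return "displacement error"

    # Sizing: relative deviation of width and height from the ground truth.
    dw = abs(detection.w - truth.w) / truth.w
    dh = abs(detection.h - truth.h) / truth.h
    if dw > size_tol or dh > size_tol:
        return "sizing error"

    return "correct"


if __name__ == "__main__":
    truth = Box(cx=50, cy=50, w=20, h=40, label="car")
    detection = Box(cx=55, cy=50, w=20, h=40, label="car")
    print(categorize(detection, truth))  # -> "displacement error"
```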

This repository uses the YOLO v3 Detector as a dependency. Ideally, the evaluation script should be easy to modify, so you could plug in any detector, as long as it returns the information the evaluation needs (see the example below).
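
For example, a drop-in detector could return per-image records along these lines. The field names and box format here are hypothetical, not the actual interface of the YOLO v3 Detector:

```python
# Hypothetical shape of the detection records a drop-in detector would return;
# field names and the (cx, cy, w, h) box format are assumptions for illustration.
detections = [
    {"label": "car", "confidence": 0.91, "box": (50.0, 50.0, 20.0, 40.0)},
    {"label": "person", "confidence": 0.78, "box": (120.0, 80.0, 15.0, 45.0)},
]
```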

## Getting started

TBD