yjh0410/yolov2-yolov3_PyTorch


Update

Recently, I have released a new YOLO project:

https://github.com/yjh0410/PyTorch_YOLO_Tutorial

In my new YOLO project, you can enjoy:

  • a new and stronger YOLOv1
  • a new and stronger YOLOv2
  • YOLOv3
  • YOLOv4
  • YOLOv5
  • YOLOv7
  • YOLOX
  • RTCDet

This project

In this project, you can enjoy:

  • YOLOv2 with DarkNet-19
  • YOLOv2 with ResNet-50
  • YOLOv2Slim
  • YOLOv3
  • YOLOv3-Spp
  • YOLOv3-Tiny

I just want to provide a good YOLO project for everyone who is interested in Object Detection.

Weights

Google Drive: https://drive.google.com/drive/folders/1T5hHyGICbFSdu6u2_vqvxn_puotvPsbd?usp=sharing

BaiDuYunDisk: https://pan.baidu.com/s/1tSylvzOVFReUAvaAxKRSwg (password: d266)

You can download all my models from the above links.

YOLOv2

YOLOv2 with DarkNet-19

Tricks

Tricks in official paper:

  • batch norm
  • hi-res classifier
  • convolutional
  • anchor boxes
  • new network
  • dimension priors
  • location prediction
  • passthrough
  • multi-scale
  • hi-res detector
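
Among these tricks, the passthrough (reorg) layer reshapes a high-resolution feature map into extra channels so that fine-grained features survive downsampling. A minimal NumPy sketch of this space-to-depth operation (the function name is illustrative, not from this repo):

```python
import numpy as np

def passthrough(x, stride=2):
    """Space-to-depth: (N, C, H, W) -> (N, C*stride^2, H/stride, W/stride)."""
    n, c, h, w = x.shape
    assert h % stride == 0 and w % stride == 0
    # Split each spatial axis into (coarse, fine) parts...
    x = x.reshape(n, c, h // stride, stride, w // stride, stride)
    # ...then move the fine parts into the channel dimension.
    x = x.transpose(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * stride * stride, h // stride, w // stride)

# A 26x26x512 map becomes 13x13x2048, ready to concatenate with the 13x13 head.
feat = np.random.rand(1, 512, 26, 26).astype(np.float32)
print(passthrough(feat).shape)  # (1, 2048, 13, 13)
```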

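The "location prediction" trick constrains each box center to its grid cell: the network outputs (tx, ty, tw, th) and the box is decoded as bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw·e^tw, bh = ph·e^th, where (cx, cy) is the cell offset and (pw, ph) is an anchor from the dimension priors. A small sketch of that decode step (not the repo's actual code):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph, stride=32):
    """Decode one YOLOv2-style prediction into an image-space box center/size."""
    bx = (sigmoid(tx) + cx) * stride   # center x, constrained to cell (cx, cx+1)
    by = (sigmoid(ty) + cy) * stride   # center y
    bw = pw * np.exp(tw) * stride      # width scales the anchor (dimension prior)
    bh = ph * np.exp(th) * stride
    return bx, by, bw, bh

# With t = 0 the box sits at its cell center with exactly the anchor's size.
print(decode_box(0.0, 0.0, 0.0, 0.0, cx=6, cy=6, pw=3.0, ph=3.0))  # (208.0, 208.0, 96.0, 96.0)
```
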
VOC2007

| data | size | Original (darknet) | Ours (pytorch), 160 epochs | Ours (pytorch), 250 epochs |
|------|------|--------------------|----------------------------|----------------------------|
| VOC07 test | 416 | 76.8 | 76.0 | 77.1 |
| VOC07 test | 544 | 78.6 | 77.0 | 78.1 |

COCO

| model | data | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|----|------|------|------|------|------|
| Original (darknet) | COCO test-dev | 21.6 | 44.0 | 19.2 | 5.0 | 22.4 | 35.5 |
| Ours (pytorch) | COCO test-dev | 26.8 | 46.6 | 26.8 | 5.8 | 27.4 | 45.2 |
| Ours (pytorch) | COCO eval | 26.6 | 46.0 | 26.7 | 5.9 | 27.8 | 47.1 |

YOLOv2 with ResNet-50

I replaced DarkNet-19 with ResNet-50 and obtained better results on COCO val:

| model | data | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|----|------|------|------|------|------|
| Our YOLOv2-320 | COCO eval | 25.8 | 44.6 | 25.9 | 4.6 | 26.8 | 47.9 |
| Our YOLOv2-416 | COCO eval | 29.0 | 48.8 | 29.7 | 7.4 | 31.9 | 48.3 |
| Our YOLOv2-512 | COCO eval | 30.4 | 51.6 | 30.9 | 10.1 | 34.9 | 46.6 |
| Our YOLOv2-544 | COCO eval | 30.4 | 51.9 | 30.9 | 11.1 | 35.8 | 45.5 |
| Our YOLOv2-608 | COCO eval | 29.2 | 51.6 | 29.1 | 13.6 | 36.8 | 40.5 |

YOLOv3

VOC2007

| data | size | Original (darknet) | Ours (pytorch), 250 epochs |
|------|------|--------------------|----------------------------|
| VOC07 test | 416 | 80.25 | 81.4 |

COCO

Official YOLOv3:

| model | data | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|----|------|------|------|------|------|
| YOLOv3-320 | COCO test-dev | 28.2 | 51.5 | - | - | - | - |
| YOLOv3-416 | COCO test-dev | 31.0 | 55.3 | - | - | - | - |
| YOLOv3-608 | COCO test-dev | 33.0 | 57.0 | 34.4 | 18.3 | 35.4 | 41.9 |

Our YOLOv3:

| model | data | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|----|------|------|------|------|------|
| YOLOv3-320 | COCO test-dev | 33.1 | 54.1 | 34.5 | 12.1 | 34.5 | 49.6 |
| YOLOv3-416 | COCO test-dev | 36.0 | 57.4 | 37.0 | 16.3 | 37.5 | 51.1 |
| YOLOv3-608 | COCO test-dev | 37.6 | 59.4 | 39.9 | 20.4 | 39.9 | 48.2 |

YOLOv3SPP

COCO:

| model | data | AP | AP50 | AP75 | AP_S | AP_M | AP_L |
|-------|------|----|------|------|------|------|------|
| YOLOv3Spp-320 | COCO eval | 32.78 | 53.79 | 33.9 | 12.4 | 35.5 | 50.6 |
| YOLOv3Spp-416 | COCO eval | 35.66 | 57.09 | 37.4 | 16.8 | 38.1 | 50.7 |
| YOLOv3Spp-608 | COCO eval | 37.52 | 59.44 | 39.3 | 21.5 | 40.6 | 49.6 |
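
The SPP block enlarges the receptive field by concatenating a feature map with stride-1 max-poolings at several kernel sizes (5, 9, 13 in the common YOLOv3-SPP design), all with "same" padding so spatial size is preserved. A NumPy sketch of the idea (illustrative, not this repo's implementation):

```python
import numpy as np

def max_pool_same(x, k):
    """Stride-1 max pooling with 'same' padding over a (H, W) map."""
    p = k // 2
    xp = np.pad(x, p, constant_values=-np.inf)
    win = np.lib.stride_tricks.sliding_window_view(xp, (k, k))
    return win.max(axis=(-1, -2))

def spp(x, kernels=(5, 9, 13)):
    """Concatenate a (C, H, W) map with its pooled versions along channels."""
    pooled = [np.stack([max_pool_same(c, k) for c in x]) for k in kernels]
    return np.concatenate([x] + pooled, axis=0)

# Channel count grows 4x: the input plus one pooled copy per kernel size.
feat = np.random.rand(512, 13, 13).astype(np.float32)
print(spp(feat).shape)  # (2048, 13, 13)
```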

YOLOv3Tiny

data AP AP50 AP75 AP_S AP_M AP_L
(official) YOLOv3Tiny COCO test-dev - 33.1 - - - -
(Our) YOLOv3Tiny COCO val 15.9 33.8 12.8 7.6 17.7 22.4

Installation

  • PyTorch (GPU) 1.1.0 / 1.2.0 / 1.3.0
  • Tensorboard 1.14
  • opencv-python
  • Python 3.6 / 3.7

Dataset

VOC Dataset

I copied the download scripts from the following excellent project: https://github.com/amdegroot/ssd.pytorch

I have uploaded VOC2007 and VOC2012 to BaiDuYunDisk, so researchers in China can download them from:

Link: https://pan.baidu.com/s/1tYPGCYGyC0wjpC97H-zzMQ

Password: 4la9

You will get a VOCdevkit.zip; just unzip it into data/. The VOC dataset paths are then data/VOCdevkit/VOC2007 and data/VOCdevkit/VOC2012.
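
A quick sanity check for that layout, assuming the default data/ root (the helper name is mine, not part of the repo):

```python
import os

def missing_voc_dirs(root="data"):
    """Return the expected VOC directories that are not present under root."""
    expected = [os.path.join(root, "VOCdevkit", year) for year in ("VOC2007", "VOC2012")]
    return [p for p in expected if not os.path.isdir(p)]

if __name__ == "__main__":
    missing = missing_voc_dirs()
    print("OK" if not missing else "Missing: " + ", ".join(missing))
```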

Download VOC2007 trainval & test

# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>

Download VOC2012 trainval

# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>

MSCOCO Dataset

I copied the download scripts from the following excellent project: https://github.com/DeNA/PyTorch_YOLOv3

Download MSCOCO 2017 dataset

Just run sh data/scripts/COCO2017.sh. You will get COCO train2017, val2017, test2017.

Train

VOC

python train.py -d voc --cuda -v [select a model] -hr -ms --ema

You can run python train.py -h to check all optional arguments.

COCO

If you have only one GPU:

python train.py -d coco --cuda -v [select a model] -hr -ms --ema

If you have multiple GPUs (e.g., 8) and put 4 images on each GPU:

python -m torch.distributed.launch --nproc_per_node=8 train.py -d coco --cuda -v [select a model] -hr -ms --ema \
                                                                        -dist \
                                                                        --sybn \
                                                                        --num_gpu 8\
                                                                        --batch_size 4
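
The --ema flag keeps an exponential moving average of the model weights and evaluates with the averaged copy, which usually smooths out late-training noise. The update rule itself is simple; a framework-free sketch (class and names are illustrative):

```python
class EMA:
    """Exponential moving average over a dict of parameter values."""
    def __init__(self, params, decay=0.9999):
        self.decay = decay
        self.shadow = dict(params)   # averaged copy, initialized from the model

    def update(self, params):
        d = self.decay
        for name, value in params.items():
            # shadow <- decay * shadow + (1 - decay) * current
            self.shadow[name] = d * self.shadow[name] + (1.0 - d) * value

# With decay=0.5 the average moves halfway toward the current weight each step.
ema = EMA({"w": 0.0}, decay=0.5)
for step_value in (1.0, 1.0, 1.0):
    ema.update({"w": step_value})
print(ema.shadow["w"])  # 0.875
```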

Test

VOC

python test.py -d voc --cuda -v [select a model] --trained_model [path to model weights]

COCO

python test.py -d coco-val --cuda -v [select a model] --trained_model [path to model weights]

Evaluation

VOC

python eval.py -d voc --cuda -v [select a model] --train_model [path to model weights]

COCO

To run on COCO_val:

python eval.py -d coco-val --cuda -v [select a model] --train_model [path to model weights]

To run on COCO test-dev (make sure you have downloaded test2017):

python eval.py -d coco-test --cuda -v [select a model] --train_model [path to model weights]

You will get a .json file which can be submitted to the COCO test server for evaluation.
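
The submission format is a flat JSON list of detections, one dict per box, with bbox given as [x, y, w, h] in absolute pixels. A minimal sketch of building such a list (the detection values here are made up for illustration):

```python
import json

def to_coco_results(detections):
    """detections: iterable of (image_id, category_id, x, y, w, h, score)."""
    return [
        {"image_id": img_id,
         "category_id": cat_id,
         "bbox": [round(x, 2), round(y, 2), round(w, 2), round(h, 2)],
         "score": round(score, 4)}
        for img_id, cat_id, x, y, w, h, score in detections
    ]

dets = [(42, 1, 10.0, 20.0, 30.0, 40.0, 0.91)]
print(json.dumps(to_coco_results(dets)))
```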
