
Combining "segment-anything" with MOT, it create the era of "MOTS"


MOTRv2: Bootstrapping End-to-End Multi-Object Tracking by Pretrained Object Detectors

This repository is forked from MOTRv2 (https://github.com/megvii-research/MOTRv2); we will release our own code, CO-MOT, later.

Main Results

DanceTrack

HOTA   DetA   AssA   MOTA   IDF1   URL
69.9   83.0   59.0   91.9   71.7   model

Visualization

[VISAM demo]

Installation

The codebase is built on top of Deformable DETR and MOTR.

Requirements

  • Install PyTorch using conda (optional)

    conda create -n motrv2 python=3.9
    conda activate motrv2
    conda install pytorch==1.12.0 torchvision==0.13.0 torchaudio==0.12.0 cudatoolkit=11.3 -c pytorch
  • Other requirements

    pip install -r requirements.txt
  • Build MultiScaleDeformableAttention

    cd ./models/ops
    sh ./make.sh
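
A quick smoke test can confirm that the environment sees the GPU and that the op built correctly; test.py is the gradient-check script that ships with Deformable DETR's ops directory (a CUDA-capable GPU is assumed):

# from the repository root: check that PyTorch sees the GPU
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# gradient-check the compiled MultiScaleDeformableAttention op
cd ./models/ops
python test.py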

Usage

Dataset preparation

  1. Download the YOLOX detections (det_db_motrv2.json) from here.
  2. Please download DanceTrack and CrowdHuman and unzip them as follows:
/data/Dataset/mot
├── crowdhuman
│   ├── annotation_train.odgt
│   ├── annotation_trainval.odgt
│   ├── annotation_val.odgt
│   └── Images
├── DanceTrack
│   ├── test
│   ├── train
│   └── val
├── det_db_motrv2.json

You may use the following command to generate the CrowdHuman trainval annotation:

cat annotation_train.odgt annotation_val.odgt > annotation_trainval.odgt
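
Since .odgt files hold one JSON record per line, a quick line count verifies the merge (train plus val should equal trainval):

wc -l annotation_train.odgt annotation_val.odgt annotation_trainval.odgt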

Training

You may download the COCO-pretrained weights from Deformable DETR (+ iterative bounding box refinement) and set the --pretrained argument to the path of the weights. Then train MOTR on 8 GPUs as follows:

./tools/train.sh configs/motrv2.args
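
Assuming configs/motrv2.args lists one flag per line with an existing --pretrained entry (as in upstream MOTRv2), you can point it at the downloaded weight in place; the checkpoint path below is hypothetical:

# replace the --pretrained line (or simply edit the file by hand); path is hypothetical
sed -i 's|^--pretrained .*|--pretrained /path/to/r50_deformable_detr_plus_iterative_bbox_refinement.pth|' configs/motrv2.args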

Inference on DanceTrack Test Set

  1. Download the SAM weight for the ViT-H SAM model (see the download command after the code below).
  2. Run:
# run a simple inference on our pretrained weights
./tools/simple_inference.sh ./motrv2_dancetrack.pth

# Or evaluate an experiment run
# ./tools/eval.sh exps/motrv2/run1

# then zip the results
zip motrv2.zip tracker/ -r
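
For step 1, the ViT-H checkpoint can also be fetched from the command line; the URL below is the one published in the segment-anything README, so verify it is still current:

# download the ViT-H SAM checkpoint (URL taken from the segment-anything README)
wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth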

If you want to run on your own data, please first obtain detection results via ByteTrack inference.

Acknowledgements

This codebase builds on Deformable DETR, MOTR, MOTRv2, ByteTrack, and Segment Anything.
