We recommend using CUDA 11.8 to avoid unnecessary environment problems.
```bash
conda create -y -n deoe python=3.11
conda activate deoe
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install wandb pandas plotly opencv-python tabulate pycocotools bbox-visualizer StrEnum hydra-core einops \
    torchdata tqdm numba h5py hdf5plugin lovely-tensors tensorboardX pykeops scikit-learn ipdb timm opencv-python-headless \
    pytorch_lightning==1.8.6 numpy==1.26.3
```
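As an optional sanity check, you can confirm that your driver supports CUDA 11.8 and that PyTorch sees the GPU; `nvidia-smi` and the one-liner below are standard tools, not part of this repo:

```bash
# Optional sanity check: the driver's supported CUDA version is shown
# in the nvidia-smi header.
nvidia-smi
# PyTorch should report a cu118 build and detect the GPU.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```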
We recommend training and evaluating on DSEC-Detection first (about 2 days), since 1 Mpx usually takes much longer to train (about 10 days) if you only have a single GPU.
You can download the processed DSEC-Detection by clicking here.
You can get the raw GEN4 dataset from RVT, and obtain the processed data by following the instructions provided by RVT. Note that you should keep the labels for all classes, following here.
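Once the data is prepared, a quick listing can confirm the split layout; the directory names below are hypothetical, inferred only from the demo path `/DSEC_process/val/zurich_city_15_a` used later in this README:

```bash
# Hypothetical layout check; names inferred from the demo path
# /DSEC_process/val/zurich_city_15_a referenced in the Demo section.
ls /DSEC_process          # expect split folders such as train/ and val/
ls /DSEC_process/val      # expect sequence folders such as zurich_city_15_a/
```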
| | DSEC-Detection | GEN4 |
|---|---|---|
| Pre-trained checkpoints | download | download |
| AUC-Unknown | 25.1 | 23.5 |
- Set `DATASET` = `dsec` or `gen4`.
- Set `DATADIR` = path to the DSEC-Detection or 1 Mpx dataset directory.
- Set `CHECKPOINT` = path to the checkpoint used for evaluation.
```bash
python validation.py dataset={DATASET} dataset.path={DATADIR} checkpoint={CHECKPOINT} +experiment/{DATASET}='base.yaml'
```
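For example, a filled-in invocation might look like the following; the dataset and checkpoint paths are placeholders, not paths shipped with the repo:

```bash
# Example evaluation run with placeholder paths (adjust to your setup).
python validation.py dataset=dsec dataset.path=/DSEC_process \
    checkpoint=./checkpoints/dsec.ckpt +experiment/dsec='base.yaml'
```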
The batch size, learning rate, and other hyperparameters can be adjusted in `config/experiments/dataset/base.yaml`.
- Set `testing_classes` to the full set of categories in `config/dataset/dataset.yaml`.
- Set `unseen_classes` to the categories to be evaluated as unknown in `config/dataset/dataset.yaml`.
The first results output by the console are for the unseen classes, while the second are for the testing classes (generally the full set of categories).
```bash
python compute_auc.py
```
- Set `DATASET` = `dsec` or `gen4`.
- Set `DATADIR` = path to the DSEC-Detection or 1 Mpx dataset directory.
```bash
python train.py dataset={DATASET} dataset.path={DATADIR} +experiment/{DATASET}='base.yaml'
```
The batch size, learning rate, and other hyperparameters can be adjusted in `config/experiments/dataset/base.yaml`.
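As a concrete sketch, a single-GPU DSEC run could be launched as below; the dataset path is a placeholder, and since the project is configured with Hydra, hyperparameters can in principle also be overridden on the command line. The `batch_size.train` key shown is hypothetical, so verify the actual key names in `config/experiments/dataset/base.yaml` before relying on it:

```bash
# Example training run with a placeholder dataset path.
python train.py dataset=dsec dataset.path=/DSEC_process +experiment/dsec='base.yaml'

# Hydra also accepts key=value overrides on the command line; the key below
# is hypothetical, so check it against config/experiments/dataset/base.yaml.
python train.py dataset=dsec dataset.path=/DSEC_process \
    +experiment/dsec='base.yaml' batch_size.train=8
```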
- Set `DATASET` = `dsec` or `gen4`.
- Set `CHECKPOINT` = path to the checkpoint used for evaluation.
- Set `h5_file` = path to the file used for visualization, e.g. `h5_file=/DSEC_process/val/zurich_city_15_a`.
```bash
python demo.py dataset={DATASET} checkpoint={CHECKPOINT} +experiment/{DATASET}='base.yaml'
```
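A filled-in demo call might look like this, assuming `h5_file` can be passed as a Hydra override per the step above (the checkpoint path is a placeholder):

```bash
# Example demo invocation; the checkpoint path is a placeholder, and h5_file
# is assumed to be settable as a Hydra override rather than only in the config.
python demo.py dataset=dsec checkpoint=./checkpoints/dsec.ckpt \
    +experiment/dsec='base.yaml' h5_file=/DSEC_process/val/zurich_city_15_a
```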
The output images and video will be saved in the folder `DEOE/prediction`.
If you find our work helpful, please consider citing us.
```bibtex
@article{zhang2024detecting,
  title={Detecting Every Object from Events},
  author={Zhang, Haitian and Xu, Chang and Wang, Xinya and Liu, Bingde and Hua, Guang and Yu, Lei and Yang, Wen},
  journal={arXiv preprint arXiv:2404.05285},
  year={2024}
}
```