
Getting Started

The dataset configs are located in tools/cfgs/dataset_configs, and the model configs for the different datasets are located in tools/cfgs.
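For example, the workflows below use one config of each kind (the model config path is relative to the tools/ directory, matching the commands later in this document):

tools/cfgs/dataset_configs/kitti_dataset.yaml             # dataset config
cfgs/da-waymo-kitti_models/secondiou/secondiou_rbd.yaml   # model config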

Dataset Preparation

Currently we provide dataloaders for the KITTI, NuScenes, and Waymo datasets; support for more datasets is on the way.

KITTI Dataset

  • Please download the official KITTI 3D object detection dataset and organize the downloaded files as follows (the road planes, which are optional and used for data augmentation during training, can be downloaded from [road plane]):
  • NOTE: If you already have the data infos from pcdet v0.1, you can either reuse the old infos by setting the DATABASE_WITH_FAKELIDAR option in tools/cfgs/dataset_configs/kitti_dataset.yaml to True (a config sketch is given at the end of this subsection), or create the infos and gt database again and leave the config unchanged.
OpenPCDet
├── data
│   ├── kitti
│   │   │── ImageSets
│   │   │── training
│   │   │   ├──calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   │── testing
│   │   │   ├──calib & velodyne & image_2
├── pcdet
├── tools
  • Generate the data infos by running the following command:
python -m pcdet.datasets.kitti.kitti_beam_id --data_path data/kitti
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
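After these commands finish, the generated infos and gt database are written next to the raw data. Following OpenPCDet's conventions you should see files similar to the following (the exact file set may vary by version):

data/kitti
│── gt_database/
│── kitti_dbinfos_train.pkl
│── kitti_infos_train.pkl
│── kitti_infos_val.pkl
│── kitti_infos_trainval.pkl
│── kitti_infos_test.pkl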
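If you reuse the old pcdet v0.1 infos as described in the NOTE above, the change to tools/cfgs/dataset_configs/kitti_dataset.yaml is a single flag. A minimal sketch, assuming the flag sits under the gt_sampling augmentor as in upstream OpenPCDet (the surrounding keys may differ in your version):

# tools/cfgs/dataset_configs/kitti_dataset.yaml (excerpt; nesting is illustrative)
DATA_AUGMENTOR:
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          DATABASE_WITH_FAKELIDAR: True   # set to True when reusing pcdet v0.1 infos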

NuScenes Dataset

  • Please download the official NuScenes dataset and organize the downloaded files as follows:
OpenPCDet
├── data
│   ├── nuscenes
│   │   │── v1.0-trainval (or v1.0-mini if you use mini)
│   │   │   │── samples
│   │   │   │── sweeps
│   │   │   │── maps
│   │   │   │── v1.0-trainval  
├── pcdet
├── tools
  • Install version 1.0.5 of the nuscenes-devkit by running the following command:
pip install nuscenes-devkit==1.0.5
  • Generate the data infos by running the following command (it may take several hours):
python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \
    --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \
    --version v1.0-trainval
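If you are experimenting with the mini split instead, the same entry point should work by swapping the version flag (hypothetical invocation mirroring the command above; v1.0-mini must be placed under data/nuscenes):

python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos \
    --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml \
    --version v1.0-mini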

Waymo Open Dataset

  • Please download the official Waymo Open Dataset, including the training data training_0000.tar~training_0031.tar and the validation data validation_0000.tar~validation_0007.tar.
  • Extract all the above xxxx.tar files into the data/waymo/raw_data directory as follows (you should get 798 train tfrecords and 202 val tfrecords); a short extraction sketch is given at the end of this subsection:
OpenPCDet
├── data
│   ├── waymo
│   │   │── ImageSets
│   │   │── raw_data
│   │   │   │── segment-xxxxxxxx.tfrecord
│   │   │   │── ...
│   │   │── waymo_processed_data
│   │   │   │── segment-xxxxxxxx/
│   │   │   │── ...
│   │   │── pcdet_gt_database_train_sampled_xx/
│   │   │── pcdet_waymo_dbinfos_train_sampled_xx.pkl   
├── pcdet
├── tools
  • Install the official waymo-open-dataset by running the following command:
pip3 install --upgrade pip
# tf 2.0.0
pip3 install waymo-open-dataset-tf-2-0-0==1.2.0 --user
  • Extract point cloud data from the tfrecord files and generate data infos by running the following command (it takes several hours; you can check data/waymo/waymo_processed_data to see how many records have been processed):
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos \
    --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml

Note that you do not need to install waymo-open-dataset if you have already processed the data and do not need to evaluate with the official Waymo metrics.
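For reference, the extraction step above can be scripted. A minimal sketch, assuming all the tar files were downloaded directly into data/waymo/ (adjust paths to your setup):

cd data/waymo
mkdir -p raw_data
# extract every training and validation archive into raw_data/
for f in training_*.tar validation_*.tar; do
    tar -xf "$f" -C raw_data
done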

Training & Testing

Test and evaluate the pretrained models

  • Test with a pretrained model:
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --ckpt ${CKPT}
  • To test all the saved checkpoints of a specific training setting and plot the performance curves in TensorBoard, add the --eval_all argument:
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --eval_all
  • Notice that if you want to test on a setting with KITTI as the target domain, please add --set DATA_CONFIG_TAR.FOV_POINTS_ONLY True to use only front-view (FOV) points:
python test.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --eval_all --set DATA_CONFIG_TAR.FOV_POINTS_ONLY True
  • To test with multiple GPUs:
sh scripts/dist_test.sh ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}

# or

sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_GPUS} \
    --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE}
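Putting the pieces together, a hypothetical single-GPU evaluation of the Waymo -> KITTI SECOND-IoU model used later in this document could look like this (checkpoint path and batch size are placeholders):

python test.py --cfg_file cfgs/da-waymo-kitti_models/secondiou/secondiou_rbd.yaml \
    --batch_size 4 --ckpt ${CKPT} --set DATA_CONFIG_TAR.FOV_POINTS_ONLY True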

Train a model

You can optionally add the command-line parameters --batch_size ${BATCH_SIZE} and --epochs ${EPOCHS} to specify your preferred values.

  • Train with multiple GPUs or multiple machines:
sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file ${CONFIG_FILE}

# or 

sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_GPUS} --cfg_file ${CONFIG_FILE}
  • Train with a single GPU:
python train.py --cfg_file ${CONFIG_FILE}
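
For instance, a single-GPU run that overrides both optional parameters (values are placeholders):

python train.py --cfg_file ${CONFIG_FILE} --batch_size ${BATCH_SIZE} --epochs ${EPOCHS}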

Train the Pre-trained Model

Take the source-only model with SECOND-IoU on Waymo -> KITTI as an example:

sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file cfgs/da-waymo-kitti_models/secondiou/secondiou_rbd.yaml \
    --batch_size ${BATCH_SIZE}

Notice that you need to select the best checkpoint as your pre-trained model, because the performance of the adapted model is quite unstable when the target domain is KITTI.
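One way to pick that best model is to evaluate all saved checkpoints with the --eval_all flag described in the testing section above (hypothetical invocation; batch size is a placeholder):

python test.py --cfg_file cfgs/da-waymo-kitti_models/secondiou/secondiou_rbd.yaml \
    --batch_size ${BATCH_SIZE} --eval_all --set DATA_CONFIG_TAR.FOV_POINTS_ONLY True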

Self-training Process

You need to set --pretrained_model ${PRETRAINED_MODEL} to the best pre-trained checkpoint selected above when launching the following self-training process.

sh scripts/dist_train.sh ${NUM_GPUS} --cfg_file cfgs/da-waymo-kitti_models/secondiou_dts/dts.yaml \
    --batch_size ${BATCH_SIZE} --pretrained_model ${PRETRAINED_MODEL}

Notice that here you also need to keep track of the performance and select the best adapted model.
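
For reference, with OpenPCDet's default output layout the pre-training checkpoints are saved under an output/ directory that mirrors the config path, so ${PRETRAINED_MODEL} would typically look like the following (epoch number and the default extra tag are placeholders; verify against your own run):

../output/da-waymo-kitti_models/secondiou/secondiou_rbd/default/ckpt/checkpoint_epoch_${EPOCH}.pth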