[arXiv'2023] Rethinking Imitation-based Planner for Autonomous Driving

This is the official repository of

Rethinking Imitation-based Planner for Autonomous Driving, Jie Cheng, Yingbing Chen, Xiaodong Mei, Bowen Yang, Bo Li and Ming Liu, arXiv 2023

arXiv PDF

Highlight

  • A good starting point for research on learning-based planners on the nuPlan dataset. This repo provides detailed instructions on data preprocessing, training, and benchmarking.
  • A simple, purely learning-based baseline model, planTF, that achieves decent performance without any rule-based strategies or post-optimization.

Get Started

Setup Environment

  • set up the nuPlan dataset following the official documentation
  • set up the conda environment
conda create -n plantf python=3.9
conda activate plantf

# install nuplan-devkit
git clone https://github.com/motional/nuplan-devkit.git && cd nuplan-devkit
pip install -e .
pip install -r ./requirements.txt

# setup planTF
cd ..
git clone https://github.com/jchengai/planTF.git && cd planTF
sh ./script/setup_env.sh
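
Before moving on, it can be worth confirming that the devkit imports and the dataset paths resolve. The snippet below is only a quick sanity check, assuming the standard nuPlan environment variables (NUPLAN_DATA_ROOT, NUPLAN_MAPS_ROOT, NUPLAN_EXP_ROOT) and their default locations; adjust it to your installation.

# optional sanity check (assumes the standard nuPlan environment variables and default paths)
export NUPLAN_DATA_ROOT=${NUPLAN_DATA_ROOT:-$HOME/nuplan/dataset}
export NUPLAN_MAPS_ROOT=${NUPLAN_MAPS_ROOT:-$HOME/nuplan/dataset/maps}
export NUPLAN_EXP_ROOT=${NUPLAN_EXP_ROOT:-$HOME/nuplan/exp}

# the devkit should import cleanly and the data root should contain the nuPlan DB files
python -c "import nuplan; print('nuplan-devkit import OK')"
ls "$NUPLAN_DATA_ROOT"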

Feature cache

Preprocess the dataset to accelerate training. The following command generates 1M frames of training data from the whole nuPlan training set. You may need to:

  • change cache.cache_path to suit your setup
  • decrease/increase worker.threads_per_node depending on your RAM and CPU.
 export PYTHONPATH=$PYTHONPATH:$(pwd)

 python run_training.py \
    py_func=cache +training=train_planTF \
    scenario_builder=nuplan \
    cache.cache_path=/nuplan/exp/cache_plantf_1M \
    cache.cleanup_cache=true \
    scenario_filter=training_scenarios_1M \
    worker.threads_per_node=40

This process may take a while, so be patient (20+ hours in our setup).
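
When the run finishes, a quick look at the cache directory helps confirm that frames were actually written. This is only an illustrative check; the internal layout of the cache is determined by nuplan-devkit and may differ across versions.

# rough check of the generated feature cache (internal layout depends on the devkit version)
du -sh /nuplan/exp/cache_plantf_1M                  # total size on disk
find /nuplan/exp/cache_plantf_1M -type f | wc -l    # number of cached files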

Training

We slightly modified the training script provided by nuplan-devkit for more flexible training. By default, the training script uses all visible GPUs. PlanTF is quite lightweight: it takes about 4–6 GB of GPU memory with a batch size of 32 per GPU.

CUDA_VISIBLE_DEVICES=0,1,2,3 python run_training.py \
  py_func=train +training=train_planTF \
  worker=single_machine_thread_pool worker.max_workers=32 \
  scenario_builder=nuplan cache.cache_path=/nuplan/exp/cache_plantf_1M cache.use_cache_without_dataset=true \
  data_loader.params.batch_size=32 data_loader.params.num_workers=32 \
  lr=1e-3 epochs=25 warmup_epochs=3 weight_decay=0.0001 \
  lightning.trainer.params.val_check_interval=0.5 \
  wandb.mode=online wandb.project=nuplan wandb.name=plantf

You can remove the wandb-related configurations if you prefer TensorBoard.
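
With wandb disabled, Lightning still writes local logs that TensorBoard can read. The command below is only a sketch; the exact log directory depends on your experiment/output configuration, so point --logdir at wherever your run writes its logs.

# monitor local training logs with TensorBoard instead of wandb
# (the log directory here is an example; use your actual experiment output folder)
tensorboard --logdir /nuplan/exp --port 6006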

Trained models

Place the trained models at planTF/checkpoints/

Model                     Document  Download
PlanTF (state6+SDE)       -         OneDrive
RasterModel               Doc       OneDrive
UrbanDriver (open-loop)   Doc       OneDrive
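
A minimal sketch of the expected layout (the checkpoint file name below is a placeholder for whichever model you downloaded):

# place downloaded checkpoints under planTF/checkpoints/
# (the .ckpt file name is a placeholder; keep whatever name the download has)
cd planTF
mkdir -p checkpoints
mv ~/Downloads/plantf_state6_sde.ckpt checkpoints/
ls checkpoints/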

Evaluation

  • run a single scenario simulation (for a sanity check): sh ./script/plantf_single_scenarios.sh
  • run Test14-random: sh ./script/plantf_benchmarks.sh test14-random
  • run Test14-hard: sh ./script/plantf_benchmarks.sh test14-hard
  • run Val14 (this may take a long time): sh ./script/plantf_benchmarks.sh val14
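
To browse the simulation results visually, nuplan-devkit ships the nuBoard dashboard. The command below is only a sketch: the script path and the simulation log directory are assumptions based on the standard devkit layout, so adjust them to your setup.

# open nuBoard on the generated simulation logs (paths are examples; adjust to your setup)
python /path/to/nuplan-devkit/nuplan/planning/script/run_nuboard.py \
    simulation_path=/path/to/simulation_logs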

Results

Test14-random and Test14-hard benchmarks

Type            Method           |      Test14-random       |       Test14-hard        | Time
                                 |  OLS↑   NR-CLS↑  R-CLS↑  |  OLS↑   NR-CLS↑  R-CLS↑  |
Expert          LogReplay        | 100.0    94.03    75.86  | 100.0    85.96    68.80  |   -
Rule-based      IDM              |  34.15   70.39    72.42  |  20.07   56.16    62.26  |  32
                PDM-Closed       |  46.32   90.05    91.64  |  26.43   65.07    75.18  | 140
Hybrid          GameFormer       |  79.35   80.80    79.31  |  75.27   66.59    68.83  | 443
                PDM-Hybrid       |  82.21   90.20    91.56  |  73.81   65.95    75.79  | 152
Learning-based  PlanCNN          |  62.93   69.66    67.54  |  52.40   49.47    52.16  |  82
                UrbanDriver*     |  82.44   63.27    61.02  |  76.90   51.54    49.07  | 124
                GC-PGP           |  77.33   55.99    51.39  |  73.78   43.22    39.63  | 160
                PDM-Open         |  84.14   52.80    57.23  |  79.06   33.51    35.83  | 101
                PlanTF (Ours)    |  87.07   86.48    80.59  |  83.32   72.68    61.70  | 155

*: open-loop re-implementation

Val14 benchmark

Method          OLS    NR-CLS  R-CLS
Log-replay      100    94      80
IDM             38     77      76
GC-PGP          82     57      54
PlanCNN         64     73      72
PDM-Hybrid      84     93      92
PlanTF (Ours)   89.18  84.83   76.78

Acknowledgements

Many thanks to the open-source community. Also check out these works:

Citation

If you find this repo useful, please consider giving us a star 🌟 and citing our related paper.

@misc{cheng2023plantf,
      title={Rethinking Imitation-based Planner for Autonomous Driving},
      author={Jie Cheng and Yingbing Chen and Xiaodong Mei and Bowen Yang and Bo Li and Ming Liu},
      year={2023},
      eprint={2309.10443},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}
