PLUTO

This is the official repository of

PLUTO: Push the Limit of Imitation Learning-based Planning for Autonomous Driving,

Jie Cheng, Yingbing Chen, and Qifeng Chen

arXiv PDF

Environment Setup

Set up the dataset

Set up the nuPlan dataset following the official documentation.
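
The devkit locates the dataset through a few environment variables. A minimal sketch, assuming the default layout from the nuPlan documentation (adjust the paths to wherever you placed the data):

# locations of the nuPlan dataset, maps, and experiment outputs
export NUPLAN_DATA_ROOT="$HOME/nuplan/dataset"
export NUPLAN_MAPS_ROOT="$HOME/nuplan/dataset/maps"
export NUPLAN_EXP_ROOT="$HOME/nuplan/exp"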

Set up the conda environment

conda create -n pluto python=3.9
conda activate pluto

# install nuplan-devkit
git clone https://github.com/motional/nuplan-devkit.git && cd nuplan-devkit
pip install -e .
pip install -r ./requirements.txt

# setup pluto
cd ..
git clone https://github.com/jchengai/pluto.git && cd pluto
sh ./script/setup_env.sh
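
An optional, minimal check that the environment is usable (this assumes the steps above completed without errors):

# should print a version string without raising ImportError
python -c "import nuplan; import torch; print(torch.__version__)"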

Feature Cache

Preprocess the dataset to accelerate training. It is recommended to run a small sanity check first to make sure everything is set up correctly.

 python run_training.py \
    py_func=cache +training=train_pluto \
    scenario_builder=nuplan_mini \
    cache.cache_path=/nuplan/exp/sanity_check \
    cache.cleanup_cache=true \
    scenario_filter=training_scenarios_tiny \
    worker=sequential

Then preprocess the whole nuPlan training set (this will take some time). You may need to change cache.cache_path to suit your setup.

 export PYTHONPATH=$PYTHONPATH:$(pwd)

 python run_training.py \
    py_func=cache +training=train_pluto \
    scenario_builder=nuplan \
    cache.cache_path=/nuplan/exp/cache_pluto_1M \
    cache.cleanup_cache=true \
    scenario_filter=training_scenarios_1M \
    worker.threads_per_node=40
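
Once caching completes, a rough check of what was produced (illustrative only; the exact directory layout under cache.cache_path is an implementation detail):

# number of cached feature files and total size on disk
find /nuplan/exp/cache_pluto_1M -type f | wc -l
du -sh /nuplan/exp/cache_pluto_1M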

Training

(The training part is not fully tested.)

As before, it is recommended to run a sanity check first:

CUDA_VISIBLE_DEVICES=0 python run_training.py \
  py_func=train +training=train_pluto \
  worker=single_machine_thread_pool worker.max_workers=4 \
  scenario_builder=nuplan cache.cache_path=/nuplan/exp/sanity_check cache.use_cache_without_dataset=true \
  data_loader.params.batch_size=4 data_loader.params.num_workers=1

Training on the full dataset (without CIL):

CUDA_VISIBLE_DEVICES=0,1,2,3 python run_training.py \
  py_func=train +training=train_pluto \
  worker=single_machine_thread_pool worker.max_workers=32 \
  scenario_builder=nuplan cache.cache_path=/nuplan/exp/cache_pluto_1M cache.use_cache_without_dataset=true \
  data_loader.params.batch_size=32 data_loader.params.num_workers=16 \
  lr=1e-3 epochs=25 warmup_epochs=3 weight_decay=0.0001 \
  wandb.mode=online wandb.project=nuplan wandb.name=pluto
  • add the options model.use_hidden_proj=true +custom_trainer.use_contrast_loss=true to enable CIL (see the example after this list).

  • you can remove the wandb-related configuration if you prefer TensorBoard.
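
For reference, a CIL-enabled run would simply append the two options from the first bullet to the command above (everything else is unchanged; the wandb run name is altered only to keep the two experiments distinct):

CUDA_VISIBLE_DEVICES=0,1,2,3 python run_training.py \
  py_func=train +training=train_pluto \
  worker=single_machine_thread_pool worker.max_workers=32 \
  scenario_builder=nuplan cache.cache_path=/nuplan/exp/cache_pluto_1M cache.use_cache_without_dataset=true \
  data_loader.params.batch_size=32 data_loader.params.num_workers=16 \
  lr=1e-3 epochs=25 warmup_epochs=3 weight_decay=0.0001 \
  model.use_hidden_proj=true +custom_trainer.use_contrast_loss=true \
  wandb.mode=online wandb.project=nuplan wandb.name=pluto-cil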

Checkpoint

Download and place the checkpoint in the pluto/checkpoints folder.

Model | Download
Pluto-1M-aux-cil | OneDrive
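
After downloading from OneDrive, placing the file could look like this (the source path is hypothetical; the filename matches the one used by the simulation script below):

# run from the pluto repository root
mkdir -p checkpoints
mv ~/Downloads/pluto_1M_aux_cil.ckpt ./checkpoints/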

Run Pluto-planner simulation

Run a simulation for a random scenario in the nuPlan-mini split:

sh ./script/run_pluto_planner.sh pluto_planner nuplan_mini mini_demo_scenario pluto_1M_aux_cil.ckpt /dir_to_save_the_simulation_result_video

The rendered simulation video will be saved to the specified directory (replace /dir_to_save_the_simulation_result_video with your own path).
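
For example, saving the rendered videos to a hypothetical ~/pluto_videos directory:

# create the output directory, then run the demo scenario
mkdir -p ~/pluto_videos
sh ./script/run_pluto_planner.sh pluto_planner nuplan_mini mini_demo_scenario pluto_1M_aux_cil.ckpt ~/pluto_videos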

To Do

The code is being cleaned up and will be released gradually.

  • improve docs
  • training code
  • visualization
  • pluto-planner & checkpoint
  • feature builder & model
  • initial repo & paper

Citation

If you find this repo useful, please consider giving us a star 🌟 and citing our paper.

@article{cheng2024pluto,
  title={PLUTO: Pushing the Limit of Imitation Learning-based Planning for Autonomous Driving},
  author={Cheng, Jie and Chen, Yingbing and Chen, Qifeng},
  journal={arXiv preprint arXiv:2404.14327},
  year={2024}
}
