PPO

Minimal implementation of Proximal Policy Optimization (PPO) in PyTorch

  • Supports discrete and continuous action spaces
    • In a continuous action space, actions are sampled from a Gaussian with a constant standard deviation (see the sketch after this list).
  • Utilities for plotting learning curves in TensorBoard
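
For the continuous case, the policy head outputs the mean of a Gaussian and samples with a fixed standard deviation. A minimal PyTorch sketch of that idea (class and parameter names here are illustrative, not the repo's actual code):

import torch
import torch.nn as nn
from torch.distributions import Normal

class GaussianPolicy(nn.Module):
    # Continuous-action policy that samples from a Normal with a constant std.
    def __init__(self, obs_dim, act_dim, hidden=64, std=0.5):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, act_dim),
        )
        # Constant (non-learned) standard deviation, as described above.
        self.register_buffer("std", torch.full((act_dim,), std))

    def forward(self, obs):
        dist = Normal(self.mean_net(obs), self.std)
        action = dist.sample()
        # Sum per-dimension log-probs into a joint log-likelihood.
        return action, dist.log_prob(action).sum(-1)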

Update

  • 2023-09-09
    • Update "Generative Adversarial Imitation Learning(GAIL)"
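
GAIL trains a discriminator to tell expert (state, action) pairs from the policy's, then uses the discriminator as a learned reward for PPO. A minimal sketch of that standard objective (dimensions and names are illustrative, not the repo's code):

import torch
import torch.nn as nn
import torch.nn.functional as F

obs_dim, act_dim = 27, 8  # e.g., Ant-v4 sizes; purely illustrative
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))

def gail_losses(expert_sa, policy_sa):
    # BCE discriminator loss: expert pairs -> 1, policy pairs -> 0.
    expert_logits = disc(expert_sa)
    policy_logits = disc(policy_sa)
    disc_loss = (
        F.binary_cross_entropy_with_logits(expert_logits, torch.ones_like(expert_logits))
        + F.binary_cross_entropy_with_logits(policy_logits, torch.zeros_like(policy_logits))
    )
    # Standard GAIL reward -log(1 - D(s, a)), computed stably from logits:
    # -log(1 - sigmoid(x)) == softplus(x).
    reward = F.softplus(policy_logits)
    return disc_loss, reward.detach()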

Train

Find or make a config file and run the following command.

python main.py --config=configs/Ant-v4.yaml \
               --exp_name=test \
               --train
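
Whatever the config specifies, the core update PPO performs is the clipped surrogate objective from Schulman et al. (2017). A minimal sketch of that loss (function and argument names are illustrative):

import torch

def ppo_clip_loss(log_prob, old_log_prob, advantage, clip_eps=0.2):
    # Probability ratio pi_theta(a|s) / pi_theta_old(a|s).
    ratio = torch.exp(log_prob - old_log_prob)
    unclipped = ratio * advantage
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Negate because optimizers minimize; PPO maximizes the surrogate.
    return -torch.min(unclipped, clipped).mean()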

Make an expert dataset for GAIL

python make_expert_dataset.py --experiment_path=checkpoints/Ant/test \
                              --load_postfix=last \
                              --minimum_score=5000 \
                              --n_episode=30
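
Conceptually, the script rolls out the trained policy and keeps only episodes whose return clears --minimum_score. A sketch of that loop using the Gymnasium API (the policy interface and storage format here are assumptions, not the repo's actual code):

import gymnasium as gym
import numpy as np

def collect_expert_episodes(env_id, policy, n_episode=30, minimum_score=5000):
    env = gym.make(env_id)
    episodes = []
    while len(episodes) < n_episode:
        obs, _ = env.reset()
        states, actions, score, done = [], [], 0.0, False
        while not done:
            action = policy(obs)  # assumed: maps an observation to a NumPy action
            states.append(obs)
            actions.append(action)
            obs, reward, terminated, truncated, _ = env.step(action)
            score += reward
            done = terminated or truncated
        if score >= minimum_score:  # keep only sufficiently good episodes
            episodes.append({"states": np.array(states), "actions": np.array(actions)})
    return episodes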

How to play

python main.py --experiment_path=checkpoints/Ant/test \
               --eval \
               --eval_n_episode=50 \
               --load_postfix=last \
               --video_path=videos/Ant
  • load_postfix: postfix of the pretrained model checkpoint to load (e.g., an episode number, 'best', or 'last'); an illustrative evaluation loop follows.
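
Under the hood, evaluation with --video_path amounts to running the loaded policy and recording frames; Gymnasium's RecordVideo wrapper does exactly this. An illustrative sketch (the policy object and wiring are assumptions, not the repo's code):

import gymnasium as gym
from gymnasium.wrappers import RecordVideo

env = RecordVideo(
    gym.make("Ant-v4", render_mode="rgb_array"),
    video_folder="videos/Ant",        # corresponds to --video_path
    episode_trigger=lambda ep: True,  # record every evaluated episode
)
for _ in range(50):                   # corresponds to --eval_n_episode
    obs, _ = env.reset()
    done = False
    while not done:
        action = policy(obs)          # assumed: policy restored via --load_postfix
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
env.close()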

Result

Environment      Performance Chart     Evaluation Video
Ant-v4           (performance chart)   ant.mp4
Ant-v4 (GAIL)    (performance chart)   ant_gail.mp4
Reacher-v4       (performance chart)   reacher.mp4
HalfCheetah-v4   (performance chart)   cheetah.mp4

Reference