This repository contains PyTorch (v0.4.1) implementations of inverse reinforcement learning (IRL) algorithms.
- Apprenticeship Learning via Inverse Reinforcement Learning [2]
- Maximum Entropy Inverse Reinforcement Learning [4]
- Generative Adversarial Imitation Learning [5]
- Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow [6]
We have implemented and trained the agents with the IRL algorithms using the following environments.
For reference, reviews (in Korean) of the IRL papers below are located in the Let's do Inverse RL Guide.
- [1] A. Y. Ng, et al., "Algorithms for Inverse Reinforcement Learning", ICML 2000.
- [2] P. Abbeel, et al., "Apprenticeship Learning via Inverse Reinforcement Learning", ICML 2004.
- [3] N. D. Ratliff, et al., "Maximum Margin Planning", ICML 2006.
- [4] B. D. Ziebart, et al., "Maximum Entropy Inverse Reinforcement Learning", AAAI 2008.
- [5] J. Ho, et al., "Generative Adversarial Imitation Learning", NIPS 2016.
- [6] X. B. Peng, et al., "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow", ICLR 2019.
We have implemented APP and MaxEnt, using Q-learning as the RL step, in the `MountainCar-v0` environment.
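APP and MaxEnt differ mainly in how the reward is updated between Q-learning runs: both are driven by the gap between expert and learner feature expectations. A minimal NumPy sketch of the two update rules, assuming a reward linear in state features (function and variable names here are illustrative, not the repo's):

```python
import numpy as np

def app_projection_step(mu_expert, mu_learner):
    """APP (Abbeel & Ng 2004): the new reward weights point from the
    learner's feature expectations toward the expert's; the margin t
    serves as the stopping criterion (terminate when t < epsilon)."""
    w = mu_expert - mu_learner
    t = np.linalg.norm(w)
    return w, t

def maxent_gradient_step(theta, mu_expert, mu_learner, lr=0.05):
    """MaxEnt IRL (Ziebart et al. 2008): for a reward linear in features,
    the log-likelihood gradient is the difference of feature expectations."""
    return theta + lr * (mu_expert - mu_learner)

# Toy usage with made-up feature expectations (not from MountainCar).
mu_e = np.array([1.0, 0.5, 0.0])
mu_l = np.array([0.2, 0.5, 0.4])
w, t = app_projection_step(mu_e, mu_l)
theta = maxent_gradient_step(np.zeros(3), mu_e, mu_l)
```

In both cases the updated reward is handed back to Q-learning, which produces a new learner policy and new feature expectations for the next iteration.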
If you want to use APP, navigate to the `lets-do-irl/mountaincar/app` folder. If you want to use MaxEnt instead of APP, navigate to the `lets-do-irl/mountaincar/maxent` folder.
Train the agent with APP or MaxEnt without rendering:

```
python main.py
```
If you want to test APP, test the agent with the saved model `app_q_table.npy` in the `app/results` folder. If you want to test MaxEnt instead of APP, test the agent with the saved model `maxent_q_table.npy` in the `maxent/results` folder.

```
python test.py
```
We have trained the agents with the two IRL algorithms in the `MountainCar-v0` environment.

Algorithm | Score / Episodes | GIF |
---|---|---|
APP | ||
MaxEnt | ||
We have implemented GAIL and VAIL, using PPO as the RL step, in the `Hopper-v2` environment.
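In GAIL, the PPO step maximizes a surrogate reward derived from a discriminator trained to distinguish expert state-action pairs from the policy's. A minimal NumPy sketch of the discriminator objective (illustrative only; the repo's PyTorch code differs in architecture and optimization details):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gail_discriminator_loss(expert_logits, policy_logits):
    """Binary cross-entropy training the discriminator D so that
    D(expert) -> 1 and D(policy) -> 0. The policy's surrogate reward
    is then derived from D (e.g. -log(1 - D(s, a))), which PPO maximizes."""
    expert_loss = -np.log(sigmoid(expert_logits))        # push D(expert) toward 1
    policy_loss = -np.log(1.0 - sigmoid(policy_logits))  # push D(policy) toward 0
    return np.mean(expert_loss) + np.mean(policy_loss)

# Toy usage: well-separated logits (expert positive, policy negative)
# give a small discriminator loss.
loss = gail_discriminator_loss(np.array([2.0, 1.5]), np.array([-1.0, -0.5]))
```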
If you want to use GAIL, navigate to the `lets-do-irl/mujoco/gail` folder. If you want to use VAIL instead of GAIL, navigate to the `lets-do-irl/mujoco/vail` folder.
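VAIL extends GAIL by feeding the discriminator a stochastic latent code z ~ N(mu, sigma^2) and constraining KL(q(z|x) || N(0, I)) under a budget I_c, enforced with a dual variable beta. A hedged NumPy sketch of those two pieces (names and step sizes are ours, not the repo's):

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """KL(N(mu, exp(logvar)) || N(0, I)), averaged over the batch.
    mu, logvar have shape (batch, latent_dim)."""
    return 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))

def update_beta(beta, kl, i_c=0.5, beta_lr=1e-2):
    """Dual gradient ascent on the bottleneck constraint: beta grows
    when the KL exceeds the budget i_c, and shrinks (clipped at 0) otherwise."""
    return max(0.0, beta + beta_lr * (kl - i_c))

# Toy usage: an encoder already matching N(0, I) has zero KL,
# so beta decays toward 0.
kl = kl_to_standard_normal(np.zeros((4, 2)), np.zeros((4, 2)))
beta = update_beta(0.1, kl)
```

The discriminator loss then becomes the GAIL loss plus `beta * (kl - i_c)`, which adaptively limits how much information the discriminator can extract from its inputs.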
Train the agent with GAIL or VAIL without rendering:

```
python main.py
```
- `env`: `Ant-v2`, `HalfCheetah-v2`, `Hopper-v2` (default), `Humanoid-v2`, `HumanoidStandup-v2`, `InvertedPendulum-v2`, `Reacher-v2`, `Swimmer-v2`, `Walker2d-v2`
```
python main.py --load_model ckpt_4000.pth.tar
```

- Note that the `ckpt_4000.pth.tar` file should be in the `lets-do-irl/mujoco/save_model` folder.
Test the agent with the saved model `ckpt_4000.pth.tar` in the `gail/save_model` folder.

```
python test.py --load_model ckpt_4000.pth.tar --iter 5
```
Or, test the agent with the saved model `ckpt_4000.pth.tar` in the `vail/save_model` folder.

```
python test.py --load_model ckpt_4000.pth.tar --iter 5
```
Note that the training results are automatically saved in the `logs` folder. TensorboardX is a TensorBoard-like visualization tool for PyTorch.

Navigate to the `lets-do-irl/mujoco/gail` or `lets-do-irl/mujoco/vail` folder.

```
tensorboard --logdir logs
```
We have trained the agents with the two IRL algorithms in the `Hopper-v2` environment.

Algorithm | Score / Episodes | GIF |
---|---|---|
PPO (for comparison) | ||
GAIL | ||
VAIL | ||
We referenced the code from the repositories below.