
Let's do Inverse RL

Introduction

This repository contains PyTorch (v0.4.1) implementations of inverse reinforcement learning (IRL) algorithms.

  • Apprenticeship Learning via Inverse Reinforcement Learning [2]
  • Maximum Entropy Inverse Reinforcement Learning [4]
  • Generative Adversarial Imitation Learning [5]
  • Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow [6]

We have implemented and trained the agents with the IRL algorithms using the following environments.

For reference, reviews (in Korean) of the IRL papers below are available in the Let's do Inverse RL Guide.

  • [1] A. Y. Ng et al., "Algorithms for Inverse Reinforcement Learning", ICML 2000.
  • [2] P. Abbeel et al., "Apprenticeship Learning via Inverse Reinforcement Learning", ICML 2004.
  • [3] N. D. Ratliff et al., "Maximum Margin Planning", ICML 2006.
  • [4] B. D. Ziebart et al., "Maximum Entropy Inverse Reinforcement Learning", AAAI 2008.
  • [5] J. Ho et al., "Generative Adversarial Imitation Learning", NIPS 2016.
  • [6] X. B. Peng et al., "Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow", ICLR 2019.

Table of Contents

  • Mountain car
  • Mujoco Hopper

Mountain car

We have implemented APP and MaxEnt, using Q-learning as the RL step, in the MountainCar-v0 environment.
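For orientation, the APP algorithm from [2] alternates a Q-learning (RL) step with a reward-weight update via the projection method. Below is a minimal sketch under assumed interfaces: run_q_learning and estimate_feature_expectations are hypothetical placeholders standing in for this repository's Q-learning step and feature-expectation estimate, not its actual functions.

# Minimal sketch of the APP projection method from [2].
# run_q_learning() and estimate_feature_expectations() are hypothetical
# placeholders for the repo's Q-learning RL step and Monte Carlo
# feature-expectation estimate.
import numpy as np

def apprenticeship_learning(mu_expert, n_features, n_iters=20, eps=0.1):
    w = np.random.randn(n_features)            # initial reward weights
    policy = run_q_learning(w)                 # RL step: r(s) = w . phi(s)
    mu = estimate_feature_expectations(policy)
    mu_bar = mu
    for _ in range(n_iters):
        w = mu_expert - mu_bar                 # new reward direction
        if np.linalg.norm(w) < eps:            # margin is small enough
            break
        policy = run_q_learning(w)
        mu = estimate_feature_expectations(policy)
        a, b = mu - mu_bar, mu_expert - mu_bar # projection step from [2]
        mu_bar = mu_bar + (a @ b) / (a @ a) * a
    return w, policy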

1. Information

2. Train

If you want to use APP, navigate to the lets-do-irl/mountaincar/app folder.

If you want to use MaxEnt instead of APP, navigate to the lets-do-irl/mountaincar/maxent folder.

Basic Usage

Train the agent with APP or MaxEnt without rendering.

python main.py

Test the pretrained model

If you want to test APP, test the agent with the saved model app_q_table.npy in the app/results folder.

If you want to test MaxEnt instead of APP, test the agent with the saved model maxent_q_table.npy in the maxent/results folder.

python test.py
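The saved model here is just a NumPy array, so it can be sanity-checked before running test.py. This sketch assumes an (n_states, n_actions) layout, which may differ from the repository's actual state discretization.

# Sketch: inspect the saved Q-table; the (states, actions) layout is an
# assumption about how the repo discretizes MountainCar's 2D state.
import numpy as np

q_table = np.load('results/app_q_table.npy')
print(q_table.shape)
state_idx = 0                          # some discretized (position, velocity) bin
print(np.argmax(q_table[state_idx]))   # greedy action for that bin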

3. Trained Agent

We have trained the agents with two different IRL algorithms in the MountainCar-v0 environment.

Algorithm | Score / Episodes | GIF
APP | [score plot] | [app GIF]
MaxEnt | [score plot] | [maxent GIF]

Mujoco Hopper

We have implemented GAIL and VAIL, using PPO as the RL step, in the Hopper-v2 environment.
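As a rough sketch of what the adversarial step in GAIL [5] looks like: a discriminator is trained to separate policy from expert state-action pairs, and its output becomes the surrogate reward for PPO. The conventions below (policy labeled 1, expert labeled 0, reward -log D, network sizes) are assumptions; the repository's details may differ.

# Sketch of a GAIL-style discriminator update in PyTorch; labels, reward
# form, and network sizes are assumptions, not this repo's exact code.
import torch
import torch.nn as nn

obs_dim, act_dim = 11, 3  # Hopper-v2 observation/action sizes
disc = nn.Sequential(nn.Linear(obs_dim + act_dim, 100), nn.Tanh(),
                     nn.Linear(100, 1))
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(policy_sa, expert_sa):
    # Push D(policy) toward 1 and D(expert) toward 0.
    return (bce(disc(policy_sa), torch.ones(len(policy_sa), 1)) +
            bce(disc(expert_sa), torch.zeros(len(expert_sa), 1)))

def irl_reward(sa):
    # Surrogate reward handed to PPO in place of the environment reward.
    with torch.no_grad():
        return -torch.log(torch.sigmoid(disc(sa)) + 1e-8)

# VAIL [6] additionally encodes (s, a) into a stochastic latent z and adds
# beta * KL(q(z|s,a) || N(0, I)) to this loss, adapting beta to keep the KL
# under a target I_c -- the "variational discriminator bottleneck".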

1. Installation

2. Train

If you want to use GAIL, navigate to the lets-do-irl/mujoco/gail folder.

If you want to use VAIL instead of GAIL, navigate to the lets-do-irl/mujoco/vail folder.

Basic Usage

Train the agent with GAIL or VAIL without rendering.

python main.py
  • env: Ant-v2, HalfCheetah-v2, Hopper-v2 (default), Humanoid-v2, HumanoidStandup-v2, InvertedPendulum-v2, Reacher-v2, Swimmer-v2, Walker2d-v2

Continue training from the saved checkpoint

python main.py --load_model ckpt_4000.pth.tar
  • Note that ckpt_4000.pth.tar file should be in the lets-do-irl/mujoco/save_model folder.
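If you are unsure what such a checkpoint contains, it can be inspected before resuming. The key name below is hypothetical; check the printed keys against the repository's save code.

# Sketch: inspect a .pth.tar checkpoint before resuming; key names such as
# 'actor' are assumptions -- verify them against what print() shows.
import torch

ckpt = torch.load('save_model/ckpt_4000.pth.tar', map_location='cpu')
print(list(ckpt.keys()))
# actor.load_state_dict(ckpt['actor'])  # hypothetical key, uncomment once verified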

Test the pretrained model

Test the agent with the saved model ckpt_4000.pth.tar in the gail/save_model folder.

python test.py --load_model ckpt_4000.pth.tar --iter 5

Or, test the agent with the saved model ckpt_4000.pth.tar in the vail/save_model folder.

python test.py --load_model ckpt_4000.pth.tar --iter 5

3. Tensorboard

Note that training results are automatically saved in the logs folder. TensorboardX is a TensorBoard-like visualization tool for PyTorch.

Navigate to the lets-do-irl/mujoco/gail or lets-do-irl/mujoco/vail folder.

tensorboard --logdir logs
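For reference, the logs read by this command come from TensorboardX calls of roughly the following shape; the tag name and the dummy scores are illustrative only.

# Sketch of the TensorboardX logging that populates the logs/ folder;
# the tag 'log/score' and the dummy scores are illustrative only.
from tensorboardX import SummaryWriter

writer = SummaryWriter('logs')
for episode, score in enumerate([100.0, 250.0, 400.0]):
    writer.add_scalar('log/score', score, episode)
writer.close()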

4. Trained Agent

We have trained the agents with two different IRL algorithms in the Hopper-v2 environment, with a PPO baseline for comparison.

Algorithm | Score / Episodes | GIF
PPO (for comparison) | [score plot] | [ppo GIF]
GAIL | [score plot] | [gail GIF]
VAIL | [score plot] | [vail GIF]

Reference

We referenced the code from the repositories below.
