Source code for Inverse Reinforcement Learning

You’ll find in this repo the code for three inverse reinforcement learning algorithms:

  • LSTD-μ
  • SCIRL
  • CSI

descriptions of which can be found on my research page.

Only SCIRL has a reasonably good, heavily commented implementation; it can be found in tutorials/Exp7.py.
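For orientation, here is a minimal sketch of the idea behind SCIRL. It is not the code from tutorials/Exp7.py: the function names, the subgradient routine and the NumPy usage are my own illustrative assumptions. SCIRL plugs the expert's feature expectations μ(s, a) (estimated beforehand, e.g. with LSTD-μ) into a linear multiclass classifier as the score features; the weight vector θ that makes the expert's actions score highest also defines the recovered reward R(s) = θᵀφ(s).

```python
# Minimal, illustrative sketch of SCIRL (not the repo's actual API).
# mu(s, a): expert feature expectation of the reward features phi, estimated
# beforehand (e.g. with LSTD-mu). SCIRL fits a linear multiclass classifier
# whose score is theta . mu(s, a); the theta that makes the expert's actions
# score highest defines the recovered reward R(s) = theta . phi(s).
import numpy as np

def scirl_subgradient(theta, mu, expert_states, expert_actions, n_actions, lam=0.01):
    """Subgradient of a large-margin classification loss over the expert data."""
    grad = lam * theta
    for s, a_exp in zip(expert_states, expert_actions):
        # Margin-augmented scores: non-expert actions get a +1 margin bonus.
        scores = [theta @ mu(s, a) + (a != a_exp) for a in range(n_actions)]
        a_star = int(np.argmax(scores))  # most violating action
        grad += mu(s, a_star) - mu(s, a_exp)
    return grad

def scirl(mu, phi, expert_states, expert_actions, n_actions, dim,
          n_iter=200, lr=0.05):
    theta = np.zeros(dim)
    for t in range(n_iter):
        theta -= lr / (1.0 + t) * scirl_subgradient(theta, mu, expert_states,
                                                    expert_actions, n_actions)
    return lambda s: theta @ phi(s)  # recovered reward, linear in phi
```

Any linearly parameterized multiclass classifier can play this role; the large-margin subgradient loop above is just one common choice.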

I intend to implement these algorithms properly as part of some well-known machine learning library. When that is done, I will delete this repo.

In the meantime, feel free to try to make sense of all this. Please don’t hesitate to contact me if you have any questions.

Exp1.py: CSI on the inverted pendulum; parameters can be played with.

Exp2.py: Finding out in which areas of the inverted pendulum's state space the expert is good.

Exp3.py: Trying to find what to plot with CSI on the inverted pendulum.

Exp4.py: Testing different parameters for LSPI and CSI on the Mountain Car.

Exp5.py: Running all the algos (SCIRL x2, CSI, Classif, RE) on the Mountain Car.

Exp6.py: Evaluating the policies found in Exp5.

Exp7.py: Running SCIRL on the Mountain Car.

Exp8.py: Relative Entropy on the Mountain Car.

Exp9.py: Cascading on the data from Asterix.

Exp10.py: Relative Entropy on the Highway.

Exp11.py: Plotting the results of different IRL algos on the Mountain Car.

Exp12.py: Running SCIRL on the Highway.

Exp13.py: Running CSI on the Highway (see the CSI sketch below).
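Several of the experiments above (Exp1, Exp3, Exp4, Exp5, Exp9, Exp13) use CSI, the cascaded classification-then-regression approach. As rough orientation, here is a minimal, hedged sketch of that cascade; the scikit-learn estimators, the log-probability scores and the crude state-action featurization are my own illustrative choices, not the repo's implementation.

```python
# Illustrative sketch of the CSI cascade (not the repo's actual code):
# 1) fit a classifier to the expert's (state, action) pairs and read off a
#    per-action score q(s, a);
# 2) on sampled transitions (s, a, s'), turn q into reward samples via
#    r(s, a) ~ q(s, a) - gamma * max_a' q(s', a');
# 3) regress a reward function on those samples.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

def csi(expert_states, expert_actions, transitions, state_features, gamma=0.99):
    """transitions: iterable of (s, a, s') sampled with any policy."""
    # Step 1: classification; log-probabilities serve as the score q(s, a).
    X_exp = np.array([state_features(s) for s in expert_states])
    clf = LogisticRegression(max_iter=1000).fit(X_exp, expert_actions)

    def scores(s):
        proba = clf.predict_proba(state_features(s)[None, :])[0]
        return dict(zip(clf.classes_, np.log(proba + 1e-12)))

    # Step 2: reward samples on the sampled transitions.
    X, y = [], []
    for s, a, s_next in transitions:
        q_s, q_next = scores(s), scores(s_next)
        if a not in q_s:  # skip actions the expert never took
            continue
        X.append(np.append(state_features(s), a))  # crude state-action features
        y.append(q_s[a] - gamma * max(q_next.values()))

    # Step 3: regression; the result can be fed to any RL solver (e.g. LSPI).
    reg = Ridge().fit(np.array(X), np.array(y))
    return lambda s, a: reg.predict(np.append(state_features(s), a)[None, :])[0]
```

The design point of the cascade is that both stages are plain supervised learning, so any off-the-shelf classifier and regressor can be swapped in.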

