Reinforcement learning using kernel-based function approximation

katetolstaya/kernelrl

Kernel Reinforcement Learning

Dependencies

  • Python 2 or 3
  • OpenAI Gym, version 0.11.0
  • SciPy
  • Matplotlib

Available algorithms

  • Kernel Q-Learning with:

    • Continuous states / discrete actions
    • Continuous states and actions from ACC 2018
  • Kernel Normalized Advantage Functions in continuous action spaces from IROS 2018
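The common idea behind these algorithms is to approximate the Q-function as a kernel expansion over a growing dictionary of visited state–action pairs. A minimal sketch of that idea, assuming a Gaussian kernel and a plain TD(0) update (all class and parameter names here are illustrative, not taken from this repository):

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=0.5):
    """RBF kernel between two state vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * bandwidth ** 2))

class KernelQ:
    """Illustrative kernel Q-learning with a growing dictionary."""

    def __init__(self, n_actions, lr=0.1, gamma=0.99):
        self.n_actions = n_actions
        self.lr = lr
        self.gamma = gamma
        self.dictionary = []   # stored (state, action) centers
        self.weights = []      # one coefficient per center

    def q(self, state, action):
        """Q(s, a) as a kernel expansion over centers sharing this action."""
        return sum(w * gaussian_kernel(state, s)
                   for w, (s, a) in zip(self.weights, self.dictionary)
                   if a == action)

    def update(self, state, action, reward, next_state):
        """One TD(0) step: add a new center weighted by the TD error."""
        target = reward + self.gamma * max(
            self.q(next_state, a) for a in range(self.n_actions))
        td_error = target - self.q(state, action)
        self.dictionary.append((state, action))
        self.weights.append(self.lr * td_error)
        return td_error
```

The actual implementation additionally sparsifies the dictionary (and, for the NAF variant, parameterizes the advantage function), so this sketch only conveys the representational idea.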

To run

Kernel Q-Learning with Pendulum with prioritized experience replay

python rlcore.py cfg/kq_pendulum_per.cfg

Kernel NAF with Continuous Mountain Car

python rlcore.py cfg/knaf_mcar.cfg

Other available configuration files:

  • Kernel Q-Learning for Cont. Mountain Car: cfg/kq_cont_mcar.cfg
  • Kernel Q-Learning for Pendulum: cfg/kq_pendulum.cfg
  • Kernel Q-Learning for discrete-action Cartpole: cfg/kq_cartpole.cfg
  • Kernel NAF for Pendulum: cfg/knaf_pendulum.cfg

Composing policies

The compose folder contains the code for composing two or more trained policies as described in the IROS 2018 paper.
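One simple way to combine trained value functions is to act greedily with respect to a combination of their Q-estimates. The sketch below shows that pattern under the assumption of a plain sum; the composition rule actually used in the paper and the compose folder may differ:

```python
def compose_greedy(q_functions, state, actions):
    """Pick the action maximizing the summed Q-estimates of several
    trained policies (illustrative composition rule, not the paper's)."""
    return max(actions, key=lambda a: sum(q(state, a) for q in q_functions))
```

For example, composing two Q-functions that disagree on how strongly to prefer larger actions still yields a single greedy choice over the shared action set.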

Tuning parameters

To tune learning rates and other parameters, edit the corresponding entries in the relevant .cfg file.
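Files with a .cfg extension are typically INI-style and can be read with Python's standard configparser. A hypothetical example of reading a learning-rate entry this way; the section and key names used by rlcore.py may differ:

```python
import configparser

# Hypothetical .cfg fragment; real files live under cfg/ in this repo.
sample_cfg = """
[agent]
learning_rate = 0.01
gamma = 0.99
"""

config = configparser.ConfigParser()
config.read_string(sample_cfg)
lr = config.getfloat("agent", "learning_rate")
gamma = config.getfloat("agent", "gamma")
```

Editing a value in the file and re-running the training command is then enough to change the corresponding hyperparameter.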

Contributors

This software was created by Ekaterina Tolstaya, Ethan Stump, and Garrett Warnell.