DistributedRL-Pytorch-Ray

Distributed RL implementations using PyTorch and Ray (A3C, Distributed PPO (DPPO), Ape-X, Impala).

Algorithms

  • A3C
  • DPPO
  • Ape-X
    • (Discrete version)
  • Impala
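
All of the above combine parallel Ray workers with a central PyTorch learner. The sketch below shows only that general pattern; it is not this repository's API, and the network, worker count, and rollout length are placeholders.

```python
import ray
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Tiny placeholder policy network (sizes are arbitrary)."""
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions)
        )

    def forward(self, obs):
        return self.net(obs)


@ray.remote
class Worker:
    """Collects transitions with a local copy of the learner's weights."""
    def __init__(self):
        self.policy = Policy()

    def rollout(self, weights, steps=32):
        self.policy.load_state_dict(weights)
        transitions = []
        obs = torch.zeros(4)  # placeholder observation; a real worker steps a gym env here
        for _ in range(steps):
            logits = self.policy(obs)
            action = torch.distributions.Categorical(logits=logits).sample()
            transitions.append((obs, action.item(), 0.0))  # (obs, action, reward)
        return transitions


if __name__ == "__main__":
    ray.init()
    learner = Policy()
    workers = [Worker.remote() for _ in range(4)]
    weights = {k: v.cpu() for k, v in learner.state_dict().items()}
    # Workers run in parallel; the learner would then train on the gathered batches
    # (via worker gradients in A3C/DPPO, a replay buffer in Ape-X, or V-trace in Impala).
    batches = ray.get([w.rollout.remote(weights) for w in workers])
    print(sum(len(b) for b in batches), "transitions collected")
```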

Tested Environments

Continuous

  • MountainCarContinuous-v0
  • MuJoCo benchmarks (Hopper, etc.)

Discrete

  • CartPole-v1
  • LunarLander-v2
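
A minimal sketch, assuming the classic OpenAI Gym API, of instantiating the benchmark environments listed above (the MuJoCo/Box2D extras and the exact version suffixes depend on your installation):

```python
import gym

# Environment ids from the lists above; e.g. Hopper-v2 vs Hopper-v3 depends
# on the installed gym/mujoco versions.
env_ids = ["MountainCarContinuous-v0", "Hopper-v2", "CartPole-v1", "LunarLander-v2"]

for env_id in env_ids:
    env = gym.make(env_id)
    print(env_id, env.observation_space, env.action_space)
    env.close()
```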

TODO

Fix

  • Fix clock-time issue in the CUDA environment
  • Update Impala to a multi-learner version
  • Check Ape-X performance
    • Learning stops improving partway through training.
  • Experiment with a distributed (multi-machine) environment
    • Currently implemented to run on a single computer; see the sketch after this list.
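
For the distributed-environment item above, a minimal sketch (an assumption, not code from this repository) of how the Ray runtime would be pointed at a multi-node cluster instead of a single machine:

```python
import ray

# Single machine (what the code currently assumes): start a local Ray runtime.
ray.init()

# Multi-machine: run `ray start --head` on one node and
# `ray start --address=<head-ip>:<port>` on the others, then connect the
# driver to that existing cluster instead:
#   ray.init(address="auto")
```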

Add

  • Add LASER
  • Add R2D2
  • Add NGU
  • Add Agent57
  • Test more environments
