SAC-Discrete-Pytorch

This is a clean and robust Pytorch implementation of Soft Actor-Critic (SAC) on a discrete action space.

All the experiments are trained with the same hyperparameters. Other RL algorithms implemented in Pytorch can be found here.
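For background, the key point of discrete SAC (the Christodoulou paper listed under References) is that the actor outputs a full categorical distribution over actions, so the entropy-regularized policy objective can be evaluated as an exact expectation over actions rather than a sampled one. The snippet below is only a minimal illustration of that computation, not code taken from this repository; the tensor names (probs, log_probs, q1, q2, alpha) are assumptions.

import torch

def discrete_sac_policy_loss(probs, log_probs, q1, q2, alpha):
    # probs / log_probs: [batch, n_actions] from the categorical actor
    # q1, q2:            [batch, n_actions] from the twin critics
    min_q = torch.min(q1, q2)
    # exact expectation over the discrete action set (no reparameterization needed)
    return (probs * (alpha * log_probs - min_q)).sum(dim=1).mean()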

Dependencies

gymnasium==0.29.1
numpy==1.26.1
pytorch==2.1.0

python==3.11.5
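If you prefer to install the dependencies in one step, something like the line below should work. Note that the PyTorch package is published on PyPI as 'torch'; a CUDA-enabled build may require the extra index URL from pytorch.org.

pip install gymnasium==0.29.1 numpy==1.26.1 torch==2.1.0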

How to use my code

Train from scratch

python main.py

where the default environment is 'CartPole'.

Play with trained model

python main.py --EnvIdex 0 --render True --Loadmodel True --ModelIdex 50

which will render the 'CartPole' environment.

Change Environment

If you want to train on a different environment, run

python main.py --EnvIdex 1

The --EnvIdex can be set to 0 or 1, where

'--EnvIdex 0' for 'CartPole-v1'  
'--EnvIdex 1' for 'LunarLander-v2'   

Note: if you want to train on LunarLander-v2, you need to install box2d-py first. You can install box2d-py via:

pip install gymnasium[box2d]

Visualize the training curve

You can use tensorboard to record and visualize the training curve.

  • Installation (please make sure Pytorch is installed already):
pip install tensorboard
pip install packaging
  • Record (the training curves will be saved at '\runs'):
python main.py --write True
  • Visualization:
tensorboard --logdir runs
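Under the hood, logging with --write True is typically done through torch.utils.tensorboard. The sketch below shows that standard pattern; the run directory and scalar tag here are illustrative, not necessarily the ones used in main.py.

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir='runs/SACD_CartPole-v1')  # illustrative run directory
for step in range(1000):
    episode_return = float(step)  # placeholder value; real code would log the evaluation return
    writer.add_scalar('ep_r', episode_return, global_step=step)
writer.close()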

Hyperparameter Setting

For more details of the hyperparameter settings, please check 'main.py'.
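The command-line flags used above (--EnvIdex, --write, --render, --Loadmodel, --ModelIdex) are declared with argparse in main.py. The sketch below shows how such flags are typically defined; the defaults and help strings are illustrative assumptions, so check main.py for the real values.

import argparse

def str2bool(v):
    # booleans passed as strings (e.g. --render True) need explicit parsing,
    # since argparse would otherwise treat any non-empty string as True
    return str(v).lower() in ('true', '1', 'yes')

parser = argparse.ArgumentParser()
parser.add_argument('--EnvIdex', type=int, default=0, help='0: CartPole-v1, 1: LunarLander-v2')
parser.add_argument('--write', type=str2bool, default=False, help='log training curves with tensorboard')
parser.add_argument('--render', type=str2bool, default=False, help='render the environment')
parser.add_argument('--Loadmodel', type=str2bool, default=False, help='load a saved model')
parser.add_argument('--ModelIdex', type=int, default=50, help='which saved checkpoint to load')
opt = parser.parse_args()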

References

Christodoulou P. Soft actor-critic for discrete action settings[J]. arXiv preprint arXiv:1910.07207, 2019.

Haarnoja T, Zhou A, Abbeel P, et al. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor[C]//International conference on machine learning. PMLR, 2018: 1861-1870.

Haarnoja T, Zhou A, Hartikainen K, et al. Soft actor-critic algorithms and applications[J]. arXiv preprint arXiv:1812.05905, 2018.
