This repository contains PyTorch implementations of meta-reinforcement learning algorithms.
This repository is implemented and verified on Python 3.8.8.
To run on PyTorch 1.8.1, open the PyTorch installation link and run the installation command that matches your desired specifications (e.g., OS, package manager, and CUDA version).
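For example, a plain pip install might look like the following (this is an example only; prefer the exact command generated on the PyTorch site for your platform and CUDA version):

```shell
# Example only: the PyTorch site generates the exact command for your
# OS / package manager / CUDA combination.
pip install torch==1.8.1
```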
Next, clone this repository and run the following command:
$ make setup
The repository's high-level structure is:
└── src
    ├── envs
    ├── rl2
    │   ├── algorithm
    │   ├── configs
    │   └── results
    ├── maml
    │   ├── algorithm
    │   ├── configs
    │   └── results
    └── pearl
        ├── algorithm
        ├── configs
        └── results
TBU
TBU
TBU
We have set up automatic formatters and linters for this repository.
To run the formatters:
$ make format
To run the linters:
$ make lint
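As a sketch of what such targets typically expand to (assuming black for formatting and flake8 for linting; the repository's Makefile may use different tools or flags, so check it for the actual commands):

```shell
# Hypothetical expansion of the make targets shown above.
black src     # auto-format Python sources in place
flake8 src    # report style and lint violations
```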
New code should pass the formatters and the linters before being submitted as a PR.
Thanks goes to these wonderful people (emoji key):
Dongmin Lee 💻 📖 | Seunghyun Lee 💻
This project follows the all-contributors specification. Contributions of any kind welcome!