BERL: Benchmarking Evolutionary Reinforcement Learning (pronounced "barrel"... sort of)
A collaborative project for aggregating benchmarks of evolutionary algorithms on common reinforcement learning tasks, built on Cambrian.jl.
Contribution guidelines are available here.
Currently implemented algorithms:
- NEAT
- CGP
Currently implemented environments:
- Iris classification
- XOR
- Gym classic control
- Atari on RAM
Planned algorithms:
- HyperNEAT (feedforward and recurrent ANNs)
- CMA-ES (feedforward and recurrent ANNs)
- population-based REINFORCE
- AGRN
- TPG
- grammatical evolution
Planned environments:
- mujoco
- pybullet
- mario
Customizable fitness:
- sum of reward over an episode
- novelty search (tentative)
- MAP-Elites
CLI interaction
- Parseable arguments
Non-Cambrian algorithms
- Interaction through a simplified interface with the BERL environments
To run a selection of algorithms on the BERL benchmarks:
- Toggle the algorithms and environments you want in the YAML config files.
- Run `run_berl()`.
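As a rough sketch of what such a configuration could look like (the file name, key names, and values here are illustrative assumptions, not the project's actual schema):

```yaml
# config/berl.yaml (hypothetical layout)
# Toggle which algorithms and environments run_berl() will iterate over.
algorithms:
  neat: true
  cgp: true
environments:
  xor: true
  iris: false
  gym: true        # classic control tasks
  atari: false     # RAM-based Atari
```

Check the repository's own YAML files for the real keys before editing.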
To run a single algorithm and environment pair, you can also use:

`start_berl(algo_name::String, env_name::String; env_params...)`

where `env_params` holds environment-specific options, such as the specific game name (e.g. "CartPole-v1") when `env_name` is "atari" or "gym".
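For example, a call might look like the following sketch; the algorithm/environment identifiers and the keyword name passed through `env_params` are assumptions for illustration:

```julia
using BERL  # assuming the package module is named BERL

# Run CGP on a Gym classic-control task; "cgp" and "gym" are assumed
# registered names, and the `game` keyword is illustrative — extra
# keywords are forwarded to the environment via env_params.
start_berl("cgp", "gym"; game="CartPole-v1")
```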