Adding Hyperparameter Optimisation (HPO) #978
Labels: algorithm enhancement, major
A common task when using deep RL is tuning hyperparameters. While a lucky hand or a grid search is always an option, more structured approaches are both methodologically and computationally preferable.
The recent paper *Hyperparameters in Reinforcement Learning and How To Tune Them* proposes an evaluation protocol for HPO in deep RL.
The outcome of RL experiments often depends strongly on the selected seeds, with high variance between seeds. The paper therefore proposes, as an evaluation procedure, to define and report disjoint sets of training and test seeds: each run (of plain RL or HPO+RL) is performed on the set of training seeds and evaluated on the held-out set of test seeds.
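A minimal sketch of what this protocol could look like, assuming hypothetical `train_agent` and `evaluate_return` callables that wrap whatever entry points the high-level interfaces will expose; the seed sets and the aggregation are purely illustrative:

```python
from statistics import mean, stdev
from typing import Any, Callable, Sequence

TRAIN_SEEDS: Sequence[int] = range(5)        # seeds available to training / HPO
TEST_SEEDS: Sequence[int] = range(100, 105)  # held-out seeds, used only for the final report


def run_protocol(
    train_agent: Callable[[int], Any],             # hypothetical: trains one agent on one training seed
    evaluate_return: Callable[[Any, int], float],  # hypothetical: mean return of an agent on one seed
) -> tuple[float, float]:
    """Train one agent per training seed, then evaluate each agent on all held-out test seeds."""
    returns = [
        evaluate_return(train_agent(train_seed), test_seed)
        for train_seed in TRAIN_SEEDS
        for test_seed in TEST_SEEDS
    ]
    # Aggregate performance is reported on the test seeds only.
    return mean(returns), stdev(returns)
```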
A possible implementation strategy is to use Hydra for configuring the search spaces (on top of the high-level interfaces, #970). This allows combining them with a) the Optuna Hydra sweeper as well as b) the HPO sweepers from the aforementioned paper (a rough sketch follows the list below). We will contact the authors about integrating the sweepers from their repo, which contains sweepers for:
- Differential Evolution Hyperband
- Standard Population Based Training (with warmstarting option)
- Population Based Bandits (with Mix/Multi versions and warmstarting option)
- Bayesian-Generational Population Based Training
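As a rough sketch of the Hydra side, assuming a hypothetical `conf/ppo_hpo.yaml` config and a hypothetical `build_experiment` factory on top of #970 (neither is an existing API), a Hydra entry point driven by the Optuna sweeper plugin (`hydra-optuna-sweeper`) could look like this:

```python
import hydra
from omegaconf import DictConfig


@hydra.main(config_path="conf", config_name="ppo_hpo", version_base=None)
def main(cfg: DictConfig) -> float:
    experiment = build_experiment(cfg)                            # hypothetical factory on top of #970
    experiment.train(seeds=cfg.train_seeds)                       # training restricted to the training seeds
    mean_test_return = experiment.evaluate(seeds=cfg.test_seeds)  # evaluation on the disjoint test seeds
    # With the Optuna sweeper, the returned float is the objective being optimised.
    return mean_test_return


if __name__ == "__main__":
    main()
```

Such a script would be launched in multirun mode, e.g. `python train.py --multirun hydra/sweeper=optuna agent.lr='interval(1e-5,1e-2)'`; the exact override and search-space grammar depends on the chosen sweeper, and the sweepers from the paper's repo would presumably plug in the same way as alternative `hydra/sweeper` choices.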
@MischaPanch