Adding Hyperparameter Optimisation (HPO) #978
@Trinkle23897 we plan to address it after the high-level interfaces from @opcode81 are merged. If you have any other proposals, we would be happy to hear them! Existing HPO approaches include Hydra sweepers, Optuna, and NNI.
Generally, from a quick look, Hydra sweepers seem like an attractive option because they can be implemented on top of other HPO engines. For Optuna there is already some support, and should NNI ever be resurrected, it would probably be possible to build a new Hydra sweeper on top of it if needed.
We will do this in (at least) two stages. The first will be a proper train/test evaluation protocol for a single parameter configuration. @bordeauxred is on it.
A common task in deep RL is tuning hyperparameters. While a lucky hand or a grid search is always possible, more structured approaches are desirable and computationally more efficient.
The recent paper *Hyperparameters in Reinforcement Learning and How To Tune Them* proposes an evaluation protocol for (HPO in) deep RL.
The results of RL experiments often depend heavily on the selected seeds, with high variance across seeds. The paper therefore proposes an evaluation procedure that defines and reports disjoint sets of training and evaluation seeds: each run (of plain RL or HPO+RL) is performed on the set of training seeds and evaluated on the set of test seeds.
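As an illustration, here is a minimal sketch of such a protocol in plain Python (the seed values, `run_experiment`, and `tune` are hypothetical placeholders, not part of the paper's code or of Tianshou):

```python
import numpy as np

# Disjoint seed sets: tuning/training only ever sees TRAIN_SEEDS;
# the final reported score is computed on EVAL_SEEDS.
TRAIN_SEEDS = [0, 1, 2, 3, 4]
EVAL_SEEDS = [100, 101, 102, 103, 104]
assert set(TRAIN_SEEDS).isdisjoint(EVAL_SEEDS)


def run_experiment(config: dict, seed: int) -> float:
    """Train an agent with the given config and seed and return its score.
    Placeholder for an actual (e.g. Tianshou) training run."""
    rng = np.random.default_rng(seed)
    return float(rng.normal())  # stands in for the achieved return


def tune(configs: list[dict]) -> dict:
    """Pick the config with the best mean score across the training seeds."""
    return max(
        configs,
        key=lambda c: np.mean([run_experiment(c, s) for s in TRAIN_SEEDS]),
    )


best = tune([{"lr": 1e-3}, {"lr": 3e-4}])
# Report mean and std over the held-out evaluation seeds only.
eval_scores = [run_experiment(best, s) for s in EVAL_SEEDS]
print(f"test score: {np.mean(eval_scores):.2f} ± {np.std(eval_scores):.2f}")
```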
A possible implementation strategy is to use Hydra for the configuration of the search spaces (on top of the high-level interfaces, #970). This allows combining a) the Optuna Hydra sweeper as well as b) the HPO sweepers from the aforementioned paper (see the sketch after this list). We will contact the authors about integrating the sweepers from their repo, which contains sweepers for:
- Differential Evolution Hyperband
- Standard Population Based Training (with warmstarting option)
- Population Based Bandits (with Mix/Multi versions and warmstarting option)
- Bayesian-Generational Population Based Training
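For illustration, a minimal sketch of what a Hydra entry point driven by the Optuna sweeper could look like (assuming the `hydra-optuna-sweeper` plugin is installed; `train_and_evaluate` is a hypothetical stand-in for an actual Tianshou training run, not an existing API):

```python
from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore


@dataclass
class Config:
    lr: float = 3e-4
    gamma: float = 0.99


cs = ConfigStore.instance()
cs.store(name="config", node=Config)


def train_and_evaluate(lr: float, gamma: float) -> float:
    """Hypothetical stand-in: train on the training seeds and return the
    mean return achieved on the held-out evaluation seeds."""
    return -((lr - 1e-3) ** 2) - (gamma - 0.99) ** 2  # dummy objective


@hydra.main(config_path=None, config_name="config", version_base=None)
def main(cfg: Config) -> float:
    # The Optuna sweeper uses the task function's return value
    # as the objective to optimize.
    return train_and_evaluate(cfg.lr, cfg.gamma)


if __name__ == "__main__":
    main()

# Sweep launch (requires the hydra-optuna-sweeper plugin):
#   python train.py -m hydra/sweeper=optuna \
#       'lr=tag(log, interval(1e-5, 1e-2))' 'gamma=interval(0.9, 0.999)'
```

Defining the search space in Hydra's override grammar rather than in code is what would make it possible to swap in other sweepers (such as those from the paper's repo) without touching the training script.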
@MischaPanch