
Adding Hyperparameter Optimisation (HPO) #978

Open · 2 of 8 tasks · bordeauxred opened this issue Oct 25, 2023 · 2 comments

Labels: algorithm enhancement (Not quite a new algorithm, but an enhancement to algo. functionality), major (Large changes that cannot or should not be broken down into smaller ones)
Milestone: Release 1.0.0
bordeauxred (Contributor) commented Oct 25, 2023:

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • documentation request (i.e. "X is missing from the documentation.")
    • new feature request
  • I have visited the source website
  • I have searched through the issue tracker for duplicates
  • I have mentioned version numbers, operating system and environment, where applicable:
    import tianshou, gymnasium as gym, torch, numpy, sys
    print(tianshou.__version__, gym.__version__, torch.__version__, numpy.__version__, sys.version, sys.platform)

A common task when using deep RL is tuning hyperparameters. While a lucky hand or grid search is always possible, more structured approaches are desirable and computationally preferable.
The recent paper Hyperparameters in Reinforcement Learning and How To Tune Them proposes an evaluation protocol for HPO in deep RL.

The results of RL experiments often depend heavily on the selected seeds, with high variance between seeds. The paper therefore proposes, as an evaluation procedure, to define and report disjoint sets of training and test seeds: each run (of plain RL or HPO+RL) is performed on the set of training seeds and evaluated on the set of test seeds.
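
A minimal sketch of this protocol, with illustrative seed values; the train and evaluate callables are placeholders, not an existing Tianshou API:

    # Sketch of the disjoint-seed evaluation protocol; seed values and both
    # callables are illustrative placeholders, not an existing Tianshou API.
    from typing import Callable

    TRAIN_SEEDS = [0, 1, 2, 3, 4]           # used for training / HPO runs
    TEST_SEEDS = [100, 101, 102, 103, 104]  # held out, disjoint from TRAIN_SEEDS

    def run_protocol(
        train: Callable[[int], object],            # trains a policy on one training seed
        evaluate: Callable[[object, int], float],  # scores a policy on one test seed
    ) -> list[float]:
        # Tune and train only on the training seeds...
        policies = [train(seed) for seed in TRAIN_SEEDS]
        # ...then report performance exclusively on the held-out test seeds.
        return [evaluate(policy, seed) for policy in policies for seed in TEST_SEEDS]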

A possible implementation strategy is to use Hydra for the configuration of the search spaces (on top of the high-level interfaces from #970). This allows combining a) the Optuna Hydra sweepers with b) the HPO sweepers from the aforementioned paper (a sketch of the Hydra + Optuna route follows after this list). We will contact the authors about integrating the sweepers from their repo, which contains sweepers for:

  • Differential Evolution Hyperband
  • Standard Population Based Training (with warmstarting option)
  • Population Based Bandits (with Mix/Multi versions and warmstarting option)
  • Bayesian-Generational Population Based Training
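
As a concrete illustration of option a), a hedged sketch of what a search space configured via the hydra-optuna-sweeper plugin could look like; the hyperparameter names and the train_agent stub are assumptions for illustration, not part of Tianshou:

    # Hedged sketch of the Hydra + Optuna sweeper route (hydra-optuna-sweeper);
    # hyperparameter names and the train_agent stub are illustrative only.
    #
    # Assumed conf/config.yaml:
    #
    #   defaults:
    #     - override hydra/sweeper: optuna
    #   hydra:
    #     sweeper:
    #       direction: maximize
    #       n_trials: 50
    #       params:
    #         lr: tag(log, interval(1e-5, 1e-2))
    #         gamma: interval(0.9, 0.999)
    #   lr: 1e-3
    #   gamma: 0.99
    #
    # Launch the sweep with:  python tune.py --multirun
    import hydra
    from omegaconf import DictConfig

    def train_agent(lr: float, gamma: float) -> float:
        """Placeholder for an actual Tianshou training + evaluation run."""
        return 0.0

    @hydra.main(config_path="conf", config_name="config", version_base=None)
    def main(cfg: DictConfig) -> float:
        # The sweeper uses the return value as the objective to maximize.
        return train_agent(lr=cfg.lr, gamma=cfg.gamma)

    if __name__ == "__main__":
        main()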

MischaPanch added the algorithm enhancement and major labels and added this issue to the Release 1.0.0 milestone on Oct 25, 2023.
MischaPanch (Collaborator) commented:

@Trinkle23897 we plan to address this after the high-level interfaces from @opcode81 are merged. If you have any other proposals, we would be happy to hear them!

Existing HPO approaches include:

  1. The stable-baselines zoo, which is based on pure Optuna (not on Hydra sweepers) and has a sophisticated module for running experiments. A minimal sketch of this pure-Optuna route follows below.

  2. NNI: @bordeauxred and I actually tried it and liked it, but it seems that the project is dead, or at least stale. It's a shame... There are quite a few bugs and documentation issues in the current version, and if development has indeed come to a halt, it would be better not to rely on it.

Generally, from a quick look, Hydra sweepers seem like an attractive option because they can be implemented on top of other HPO engines. For Optuna there is already some support, and should NNI be resurrected, it would probably be possible to build a new Hydra sweeper on top of it, if ever needed.
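
The minimal sketch of the pure-Optuna route mentioned in point 1; train_agent is again a placeholder for an actual training and evaluation run, not a zoo or Tianshou function:

    # Minimal sketch of the pure-Optuna route; train_agent is a placeholder
    # for a real training + evaluation run, not a zoo or Tianshou function.
    import optuna

    def train_agent(lr: float, gamma: float) -> float:
        """Placeholder: train a policy and return its evaluation score."""
        return 0.0

    def objective(trial: optuna.Trial) -> float:
        lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
        gamma = trial.suggest_float("gamma", 0.9, 0.999)
        return train_agent(lr=lr, gamma=gamma)

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=50)
    print(study.best_params)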

MischaPanch (Collaborator) commented:

We will do this in (at least) two stages. The first will be a proper train/test evaluation protocol for a single parameter configuration. @bordeauxred is on it.
