
[RLlib] Learner API: Fix and unify grad-clipping configs and behaviors. #34464

Merged

Conversation

sven1977
Contributor

@sven1977 sven1977 commented Apr 17, 2023

Learner API: Fix and unify grad-clipping configs and behaviors.

  • Introduce new AlgorithmConfig setting: grad_clip_by, which can be set to "value", "norm" or "global_norm" and determines the mode of clipping.
  • Made grad_clip a generic AlgorithmConfig property (was only supported by some algos before). However, this setting is only used if _enable_learner_api=True.
  • Implement proper clipping behaviors based on the 3 new modes in Tf- and TorchLearner (postprocess_gradients method).
  • Add test cases for these new behaviors.
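The three clipping modes could be sketched as follows. This is an illustrative, torch-based sketch of the clipping logic only, not the actual RLlib implementation; the function name and signature are hypothetical:

```python
import torch


def postprocess_gradients(gradients_dict, grad_clip, grad_clip_by):
    """Clip a dict of gradient tensors in-place per the configured mode."""
    if grad_clip is None:
        return gradients_dict

    if grad_clip_by == "value":
        # Clip each gradient element into [-grad_clip, +grad_clip].
        for g in gradients_dict.values():
            g.clamp_(-grad_clip, grad_clip)
    elif grad_clip_by == "norm":
        # Scale each gradient tensor individually so its own L2 norm
        # does not exceed grad_clip.
        for g in gradients_dict.values():
            norm = g.norm()
            if norm > grad_clip:
                g.mul_(grad_clip / norm)
    elif grad_clip_by == "global_norm":
        # Scale all gradients jointly by the L2 norm of the concatenated
        # gradient vector (the classic tf.clip_by_global_norm behavior).
        global_norm = torch.sqrt(
            sum(g.pow(2).sum() for g in gradients_dict.values())
        )
        if global_norm > grad_clip:
            for g in gradients_dict.values():
                g.mul_(grad_clip / global_norm)
    else:
        raise ValueError(f"Unknown grad_clip_by mode: {grad_clip_by!r}")
    return gradients_dict
```

Note the key difference between "norm" and "global_norm": the former rescales each tensor independently, while the latter preserves the relative magnitudes of all gradients by applying one shared scaling factor.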

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: sven1977 <[email protected]>
@avnishn (Member) left a comment

Thanks for sharing this. However, I am overall confused by this PR. Is it incomplete?

) for k, v in gradients_dict.items()
}

# Clip by L2-norm (per gradient tensor).
Member

Are we going to allow users to clip gradients by value and then also by norm? That doesn't sound right to me, but that is the behavior that has been enabled here.

Contributor Author

Great point. I was thinking about this myself. Maybe we should just do: grad_clip: [some value] and then grad_clip_by: [value|norm|global_norm].
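The exclusivity of the proposed pair could be enforced with a simple validation check. This is a hypothetical sketch; `validate_grad_clip` is not an actual RLlib helper:

```python
# With a single threshold (grad_clip) plus one mode selector
# (grad_clip_by), only one clipping strategy can ever be active.
VALID_MODES = ("value", "norm", "global_norm")


def validate_grad_clip(grad_clip, grad_clip_by):
    """Raise if clipping is enabled with an unknown mode."""
    if grad_clip is not None and grad_clip_by not in VALID_MODES:
        raise ValueError(
            f"grad_clip_by must be one of {VALID_MODES}, got {grad_clip_by!r}"
        )
```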


# TODO (sven): Deprecate grad_clip setting once all-in on new Learner API.
self.grad_clip = 10.0
self.grad_clip_by_global_norm = 10.0
Member

How is this going to be used downstream by the QMIX policy / multi_gpu_train_one_batch?

Contributor Author

Fixed by replacing the new settings with backward-compatible ones.

Signed-off-by: sven1977 <[email protected]>
@sven1977
Contributor Author

Hey @avnishn , sorry about the confusion and thanks for taking a look!

I went through the PR once more along with your comments and addressed all of them.
I changed the configs to the backward-compatible pair of grad_clip and (new) grad_clip_by, which specifies the mode of clipping. This way, it's a) backward compatible and b) exclusive (only one way of clipping is allowed at a time).
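As a usage sketch, the resulting config pair might look like this. The `PPOConfig.training()` call pattern is standard RLlib, but the exact availability of `grad_clip_by` depends on the Ray version, so treat this as illustrative:

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .training(
        grad_clip=40.0,              # single clipping threshold
        grad_clip_by="global_norm",  # mode: "value" | "norm" | "global_norm"
    )
)
```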

@@ -422,16 +425,6 @@ def validate(self) -> None:
self.vtrace_clip_pg_rho_threshold
)

@override(AlgorithmConfig)
Contributor Author

Not needed anymore with this PR: grad clipping has been universally moved into Learner.postprocess_gradients().

@@ -79,6 +79,22 @@ def compute_gradients(

return grads

@override(Learner)
Contributor Author

This was missing in torch thus far.

@@ -27,6 +27,48 @@
tf1, tf, tfv = try_import_tf()


@PublicAPI
Contributor Author

New grad-clip utilities, replacing the old, messy ones (some of the old ones clip by value, some by norm (not global norm), some have additional optimizer-update logic in them, etc.).

@sven1977 sven1977 added the tests-ok The tagger certifies test failures are unrelated and assumes personal liability. label Apr 26, 2023
@sven1977 sven1977 merged commit 25a5bcb into ray-project:master Apr 27, 2023
ProjectsByJackHe pushed a commit to ProjectsByJackHe/ray that referenced this pull request May 4, 2023
architkulkarni pushed a commit to architkulkarni/ray that referenced this pull request May 16, 2023