
[RLlib] Make learner group with non blocking update return multiple results #35858

Merged

Conversation

avnishn (Member) commented May 29, 2023

Signed-off-by: Avnish [email protected]

The results aggregation of the learner was broken because we increased the number of async requests for the learner. This change should fix that problem.

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

kouroshHakha (Contributor) left a comment

I think the new API modifications are good, but can we clean up the learner_group a little bit? The names have become convoluted over time. I suggested some better names. Another suggestion:

update(block=True/False) --> update(async=True/False)
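A minimal sketch of what the proposed flag-style signature could look like (not code from this PR). Since async is a reserved keyword in Python, the flag is spelled do_async here purely for illustration; the thread below ends up splitting the method in two instead.

from typing import Any, Dict, List, Union

class LearnerGroupSignatureSketch:
    # Hypothetical signature only; do_async stands in for the suggested "async" flag,
    # which cannot literally be named `async` in Python 3.7+.
    def update(self, batch: Any, *, do_async: bool = False) -> Union[Dict[str, Any], List[Dict[str, Any]]]:
        ...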

rllib/core/learner/learner_group.py
        for same tags.

        """
        unprocessed_results = []
Contributor
Why not store the tagged results in a dict mapping from tag to a list of results? This gives you O(1) on the read.

from collections import defaultdict

# Sketch: result.ok, result.tag, and result.result_or_error stand in for
# whatever the async request results actually expose.
processed_results = defaultdict(list)
for result in results:
    if result.ok:
        processed_results[result.tag].append(result.result_or_error)

Then

for tag in processed_results:
    self._inflight_request_tags.remove(tag)

avnishn (Member, Author)
I can make this change, but for the given number of tags (<< 10,000) it isn't going to make any significant difference to the actual runtime. Since we aren't operating on a large amount of data here, I'll rewrite this in a way that's as easy to modify/maintain as possible.

avnishn (Member, Author)
all done!

rllib/core/learner/learner_group.py
@@ -176,7 +181,18 @@ def update(
            block: Whether to block until the update is complete.

        Returns:
            A list of dictionaries of results from the updates from the Learner(s)
            if block is true and reduce_fn is None:
Contributor
We need the API description to be higher-level and simpler to understand. It's explaining the code right now.

avnishn (Member, Author)
We should just break this into two functions instead: update and async_update. Multiple return types is weird.
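Not from the PR itself, just a minimal sketch of what such a split could look like, with illustrative class, method, and metric names; the real LearnerGroup dispatches work to remote Learner workers rather than computing results inline.

from collections import deque
from typing import Any, Dict, List

class LearnerGroupSketch:
    """Illustration only: a blocking update with one return type and a
    non-blocking update with another, instead of one method doing both."""

    def __init__(self) -> None:
        # Results of async requests that finished but were not yet handed back.
        self._ready_async_results: deque = deque()

    def update(self, batch: Any) -> Dict[str, Any]:
        # Blocking path: always returns exactly one result dict.
        return {"loss": 0.0}

    def async_update(self, batch: Any) -> List[Dict[str, Any]]:
        # Non-blocking path: queue the request, then return every result that
        # has completed since the last call (zero, one, or several).
        self._ready_async_results.append({"loss": 0.0})
        ready = list(self._ready_async_results)
        self._ready_async_results.clear()
        return ready

With two separate methods, blocking callers can rely on a single dict and non-blocking callers can treat the return value as a list, which is the multiple-return-type issue raised above.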

Contributor
Ok let’s do that instead.

Contributor
Let's also add a TODO for the reduce_fn, since we have different return types here, too, because of it.
Result reduction should happen on the training_step side.
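Again not from the PR: a minimal sketch of what reducing on the training_step side could mean, assuming the learner group hands back a plain list of per-learner metric dicts with numeric values; reduce_mean_results and the call shown in the comment are hypothetical.

from typing import Dict, List

def reduce_mean_results(results: List[Dict[str, float]]) -> Dict[str, float]:
    """Average each numeric metric across the per-learner result dicts."""
    reduced: Dict[str, float] = {}
    for key in set().union(*results):
        values = [r[key] for r in results if key in r]
        reduced[key] = sum(values) / len(values)
    return reduced

# Hypothetical use inside an algorithm's training_step(), with no reduce_fn
# passed to the learner group:
#     results = self.learner_group.update(train_batch)
#     train_results = reduce_mean_results(results)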

avnishn (Member, Author) commented May 30, 2023
@sven1977 done

kouroshHakha (Contributor)

The goal of this PR is merely to fix the learner_group test; learner_group's API surface is still not clean and can be simplified in another iteration, similar to what we did for Learner. Approving the merge for now since it's blocking CI from going green.

kouroshHakha merged commit 7d52c2f into ray-project:master on May 30, 2023
2 checks passed
@ollie-iterators

"If the queue is full itwill evict the oldest batch first." should be "If the queue is full it will evict the oldest batch first."

scv119 pushed a commit to scv119/ray that referenced this pull request Jun 16, 2023
arvind-chandra pushed a commit to lmco/ray that referenced this pull request Aug 31, 2023