[RLlib] Make learner group with non blocking update return multiple results #35858
Conversation
…esults Signed-off-by: Avnish <[email protected]>
I think the new API modifications are good, but can we clean up the learner_group a little bit? The names have become convoluted over time; I suggested some better ones. Another suggestion:
update(block=True/False) --> update(async=True/False)
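One wrinkle with the proposed rename: `async` is a reserved keyword in Python, so the flag itself would need a different spelling. Below is a minimal sketch of what the flag-based signature could look like; the names (`is_async`, the toy result dicts) are illustrative assumptions, not RLlib's actual API.

```python
from typing import Any, Callable, Dict, List, Optional


class LearnerGroup:
    """Toy stand-in to illustrate the flag-based signature; not RLlib's class."""

    def update(
        self,
        batch: Any,
        *,
        is_async: bool = False,  # hypothetical spelling; `async` itself is reserved
        reduce_fn: Optional[Callable[[List[Dict]], Dict]] = None,
    ) -> List[Dict]:
        # Pretend two remote learners each returned a loss dict for this batch.
        results = [{"loss": 0.5}, {"loss": 0.7}]
        if is_async:
            # A non-blocking call would enqueue the batch and return results of
            # previously submitted updates; here we mimic "nothing ready yet".
            return []
        if reduce_fn is not None:
            # Reduce the per-learner results into a single dict.
            return [reduce_fn(results)]
        return results


group = LearnerGroup()
print(group.update({"obs": [1, 2]}))                 # blocking, unreduced
print(group.update({"obs": [1, 2]}, is_async=True))  # non-blocking
```

The flag flips both the blocking behavior and the shape of the return value, which is part of what makes the single-method design confusing.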
rllib/core/learner/learner_group.py (outdated)

        for same tags.

        """
        unprocessed_results = []
Why not store the tagged results in a dict mapping from tag to a list of results? That gives you O(1) reads:

    processed_results = defaultdict(list)
    for result in results:
        if result.ok:
            processed_results[result.tag].append(result_or_error)

Then:

    for tag in processed_results:
        self._inflight_request_tags.remove(tag)
I can make this change, but for the given number of tags (<< 10000) it won't make any significant difference to actual runtime. Since we aren't operating on a large amount of data here, I'll rewrite this in a way that's as easy to modify/maintain as possible.
all done!
rllib/core/learner/learner_group.py
Outdated
@@ -176,7 +181,18 @@ def update( | |||
block: Whether to block until the update is complete. | |||
|
|||
Returns: | |||
A list of dictionaries of results from the updates from the Learner(s) | |||
if block is true and reduce_fn is None: |
We need the API description to be higher-level and simpler to understand. It's explaining the code right now.
We should just break this into two functions instead: update and async_update. Multiple return types are weird.
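The two-method split could look roughly like the sketch below. This is a toy illustration with made-up internals (the `_pending` list and the dummy loss values are assumptions, not the real implementation); the point is that each method gets a single, distinct return type.

```python
from typing import Any, Dict, List


class LearnerGroup:
    """Toy sketch of the suggested update/async_update split; not RLlib code."""

    def __init__(self) -> None:
        self._pending: List[Any] = []  # batches submitted but not yet finished

    def update(self, batch: Any) -> Dict:
        # Synchronous path: always returns one (possibly reduced) result dict.
        return {"loss": 0.5}

    def async_update(self, batch: Any) -> List[Dict]:
        # Asynchronous path: submit the batch and return whatever results from
        # earlier submissions have completed (possibly several, possibly none).
        done, self._pending = self._pending, [batch]
        return [{"loss": 0.5} for _ in done]


group = LearnerGroup()
print(group.async_update({"obs": [1]}))  # nothing finished yet, empty list
print(group.async_update({"obs": [2]}))  # one result from the first submission
```

With this split, callers no longer have to inspect the return value's shape to know which mode they are in.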
Ok let’s do that instead.
Let's also add a TODO for the reduce_fn. We have different return types here, too, because of it. Result reduction should happen on the training_step side.
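Moving reduction to the training_step side could look like the following sketch. `reduce_results` is a hypothetical helper standing in for the `reduce_fn` discussed above; the metric names and values are illustrative.

```python
from statistics import mean
from typing import Dict, List


def reduce_results(results: List[Dict]) -> Dict:
    """Average each metric across per-learner result dicts.

    A hypothetical stand-in for the reduce_fn discussed in this thread;
    assumes every result dict has the same keys.
    """
    keys = results[0].keys()
    return {k: mean(r[k] for r in results) for k in keys}


# Reduction happens on the training_step side, not inside update():
per_learner = [{"loss": 1.0, "lr": 0.01}, {"loss": 3.0, "lr": 0.01}]
print(reduce_results(per_learner))  # -> {'loss': 2.0, 'lr': 0.01}
```

Keeping reduction out of update() means the update methods always return raw per-learner results, and the caller decides how (or whether) to aggregate them.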
@sven1977 done
Signed-off-by: Avnish <[email protected]>
The goal of this PR is merely fixing the learner_group test; learner_group's API surface is still not clean and can be simplified in another iteration, similar to what we did for Learner. Approving the merge for now as it's blocking CI's green-ness.
"If the queue is full itwill evict the oldest batch first." should be "If the queue is full it will evict the oldest batch first."
…esults (ray-project#35858) Signed-off-by: Avnish <[email protected]>
…esults (ray-project#35858) Signed-off-by: Avnish <[email protected]> Signed-off-by: e428265 <[email protected]>
Signed-off-by: Avnish [email protected]
The results aggregation of the learner was broken because we increased the number of async requests to the learner. This change should fix that problem.
Why are these changes needed?
Related issue number
Checks
- I've signed off every commit (by using git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I've added a new method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.