
[dataset] Pipeline task submission during reduce stage in push-based shuffle #25795

Merged: 13 commits into ray-project:master on Jun 18, 2022

Conversation

stephanie-wang (Contributor)

Why are these changes needed?

The reduce stage in push-based shuffle fails to complete at 100k output partitions or more. This is likely due to driver or raylet load from having too many tasks in flight at once.

We could also fix this in Ray core, but for now this PR adds pipelining to the reduce stage to limit the total number of reduce tasks in flight at the same time. The limit is currently set to 2 * the available parallelism in the cluster. We have to pick which reduce tasks to submit carefully since these are pinned to specific nodes, so the PR assigns tasks round-robin according to their corresponding merge tasks (which are spread throughout the cluster).

In addition, this PR refactors the map, merge, and reduce stages to use a common pipelined iterator pattern, since they all have a similar pattern of submitting a round of tasks at a time, then waiting for a previous round to finish before submitting more.
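
As a rough illustration of this pattern (not the actual code in this PR), here is a minimal sketch of a pipelined submission loop that uses ray.wait to cap the number of in-flight tasks at 2 * parallelism. The task body, partition count, and parallelism value are placeholders, and the node-aware round-robin placement described above is omitted.

```python
import ray

ray.init()

@ray.remote
def reduce_task(partition_id):
    # Placeholder for the real work of producing one output partition.
    return partition_id

def submit_pipelined(num_partitions, parallelism):
    """Submit reduce tasks while keeping at most 2 * parallelism in flight."""
    max_in_flight = 2 * parallelism
    refs = []
    for partition_id in range(num_partitions):
        if partition_id >= max_in_flight:
            # Before submitting another task, block until the task submitted
            # max_in_flight iterations ago has finished, so the number of
            # unfinished tasks stays bounded.
            ray.wait([refs[partition_id - max_in_flight]], num_returns=1)
        refs.append(reduce_task.remote(partition_id))
    return ray.get(refs)

results = submit_pipelined(num_partitions=1000, parallelism=8)
```

The actual change shares a single pipelined iterator across the map, merge, and reduce stages rather than a free-standing loop like this.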

Related issue number

Closes #25412.

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Adds a multinode test, but it would be nice to add a test that hooks into ray.remote and ray.get to check that we are actually submitting the right tasks at the right time.
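
A rough sketch of the kind of unit test being suggested, written against a stand-in submission loop with injectable submit/wait hooks rather than the PR's actual classes (all names here are hypothetical). It checks that the number of in-flight tasks never exceeds the configured limit:

```python
def pipelined_submit(num_tasks, max_in_flight, submit_fn, wait_fn):
    """Submit tasks, blocking via wait_fn once max_in_flight are outstanding."""
    in_flight = []
    for i in range(num_tasks):
        if len(in_flight) >= max_in_flight:
            finished = wait_fn(in_flight)
            in_flight.remove(finished)
        in_flight.append(submit_fn(i))


def test_in_flight_is_bounded():
    outstanding = set()
    peak = 0

    def submit_fn(task_id):
        nonlocal peak
        outstanding.add(task_id)
        peak = max(peak, len(outstanding))
        return task_id

    def wait_fn(in_flight):
        # Pretend the oldest outstanding task finishes first.
        finished = in_flight[0]
        outstanding.discard(finished)
        return finished

    pipelined_submit(num_tasks=100, max_in_flight=8,
                     submit_fn=submit_fn, wait_fn=wait_fn)
    assert peak <= 8


test_in_flight_is_bounded()
```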

@stephanie-wang (Contributor, Author)

(also includes changes from #25734)

@ericl (Contributor) left a comment


The key config here is num_merge_tasks_per_round, is that right? Given this is a temporary workaround, I think we should make this feature flagged, and add a TODO to remove this once we fix the core issue. 100k tasks should fit comfortably within our scalability envelope, so this is a bit odd.
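
For context, a minimal sketch of how such a feature flag could look; the environment variable and function names below are illustrative assumptions, not the flag actually added in this PR:

```python
import os

# Hypothetical flag name; the real flag introduced for this PR may differ.
PIPELINED_REDUCE_ENABLED = os.environ.get("RAY_PIPELINED_REDUCE", "1") == "1"

def max_reduce_tasks_in_flight(parallelism: int, num_reducers: int) -> int:
    # TODO: remove this workaround once the underlying scheduling issue
    # is fixed in Ray core.
    if PIPELINED_REDUCE_ENABLED:
        return 2 * parallelism
    # With the flag off, fall back to submitting every reduce task up front.
    return num_reducers
```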

@ericl (Contributor) commented on Jun 15, 2022

It would also be great to have a unit test that things work with pipelining enabled/disabled, so we can easily remove it in the future.

@stephanie-wang (Contributor, Author)

> The key config here is num_merge_tasks_per_round, is that right? Given this is a temporary workaround, I think we should make this feature flagged, and add a TODO to remove this once we fix the core issue. 100k tasks should fit comfortably within our scalability envelope, so this is a bit odd.

Actually, it's the reduce stage that is failing, so I think the number that matters is num rounds (= num map tasks / parallelism) * num reducers. Yeah, I also thought it was strange, but it may have something to do with the fact that each reduce task also has a lot of plasma args, which we don't test in the scalability envelope. It's fewer than in simple shuffle, but still on the order of 100s of args.
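
To make the arithmetic concrete, here is an illustrative calculation. The workload numbers are made up, and the assumption that each reduce task consumes roughly one merged block per round is one reading of the comment above, not a measurement:

```python
# Illustrative numbers only.
num_map_tasks = 100_000   # map tasks in the shuffle
parallelism = 500         # CPUs available in the cluster
num_reducers = 100_000    # output partitions

# Each round of the map/merge stages processes `parallelism` map tasks.
num_rounds = num_map_tasks // parallelism        # 200 rounds

# If each reduce task takes about one merged block per round as a plasma arg,
# that is on the order of hundreds of args per reduce task...
args_per_reduce_task = num_rounds                # ~200 plasma args

# ...and num_rounds * num_reducers args across the whole reduce stage.
total_reduce_args = num_rounds * num_reducers    # 20,000,000 args
print(args_per_reduce_task, total_reduce_args)
```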

@ericl (Contributor) left a comment


Feature flag lgtm... rubber stamping on the rest.

@ericl added the @author-action-required label (The PR author is responsible for the next step. Remove tag to send back to the reviewer.) on Jun 16, 2022
@stephanie-wang merged commit 93aae48 into ray-project:master on Jun 18, 2022
@stephanie-wang deleted the pipelined-reduce branch on Jun 18, 2022 at 00:33
Labels
@author-action-required The PR author is responsible for the next step. Remove tag to send back to the reviewer.
Development

Successfully merging this pull request may close these issues:
  • [core] Scheduler stalls during shuffle reduce stage with 100k concurrent tasks or more

4 participants