
[Data] Fix actor pool scale up logic to consider min_workers #34093

Merged: 1 commit, Apr 5, 2023

Conversation

@c21 (Contributor) commented Apr 5, 2023

Why are these changes needed?

This fixes the actor pool scale-up logic to take min_workers (the minimum required number of actors) into account.
Previously it was not considered, so num_total_workers == 0 could occur and lead to ZeroDivisionError: division by zero.
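A minimal sketch of the guarded scale-up check described above. The function name, parameters, and the utilization threshold here are illustrative assumptions, not Ray's actual implementation:

```python
# Hypothetical sketch (assumed names, not the actual Ray source): scale up
# unconditionally while the pool is below min_workers, and only consult the
# utilization ratio once at least one worker exists, so the division can
# never see num_total_workers == 0.

def should_scale_up(
    num_total_workers: int,
    num_running_workers: int,
    min_workers: int = 1,
    utilization_threshold: float = 0.8,
) -> bool:
    if num_total_workers < min_workers:
        # Below the minimum pool size: always scale up. This branch also
        # covers num_total_workers == 0, which previously caused
        # ZeroDivisionError in the ratio check below.
        return True
    return (num_running_workers / num_total_workers) > utilization_threshold
```

With the guard in place, an empty pool (num_total_workers == 0) returns True instead of dividing by zero.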

Example stack trace:

  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/torch_iterable_dataset.py", line 18, in __iter__
    yield from it
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/dataset_iterator.py", line 570, in make_generator
    for batch in self.iter_batches(
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/dataset_iterator.py", line 156, in iter_batches
    block_iterator, stats = self._to_block_iterator()
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/dataset_iterator/dataset_iterator_impl.py", line 31, in _to_block_iterator
    block_iterator, stats, executor = ds._plan.execute_to_iterator()
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/plan.py", line 530, in execute_to_iterator
    block_iter = itertools.chain([next(gen)], gen)
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/legacy_compat.py", line 49, in execute_to_legacy_block_iterator
    for bundle in bundle_iter:
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/interfaces.py", line 465, in __next__
    return self.get_next()
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/streaming_executor.py", line 116, in get_next
    raise item
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/streaming_executor.py", line 163, in run
    while self._scheduling_loop_step(self._topology) and not self._shutdown:
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/streaming_executor.py", line 217, in _scheduling_loop_step
    op = select_operator_to_run(
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/streaming_executor_state.py", line 362, in select_operator_to_run
    _try_to_scale_up_cluster(topology, execution_id)
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/streaming_executor_state.py", line 419, in _try_to_scale_up_cluster
    per_task_resource = op.incremental_resource_usage()
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/operators/actor_pool_map_operator.py", line 262, in incremental_resource_usage
    if self._autoscaling_policy.should_scale_up(
  File "/home/ray/anaconda3/lib/python3.9/site-packages/ray/data/_internal/execution/operators/actor_pool_map_operator.py", line 422, in should_scale_up
    and num_running_workers / num_total_workers
ZeroDivisionError: division by zero
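The failing expression at the bottom of the trace can be reproduced in isolation. This toy sketch (names assumed for illustration, not Ray's actual code) shows why an empty actor pool trips the check:

```python
def buggy_should_scale_up(num_total_workers: int,
                          num_running_workers: int,
                          utilization_threshold: float = 0.8) -> bool:
    # Pre-fix logic: divides without guarding against an empty pool.
    return (num_running_workers / num_total_workers) > utilization_threshold

try:
    # An actor pool that has not started any workers yet: 0 / 0.
    buggy_should_scale_up(0, 0)
except ZeroDivisionError as e:
    print(f"ZeroDivisionError: {e}")  # same error as in the stack trace above
```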

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

num_total_workers = 0
num_running_workers = 0
# Should scale up since under pool min workers.
assert policy.should_scale_up(num_total_workers, num_running_workers)
@c21 (Contributor, Author) commented:
Verified this test throws exception without the fix.

@c21 c21 merged commit eba0bfc into ray-project:master Apr 5, 2023
@c21 c21 added the tests-ok The tagger certifies test failures are unrelated and assumes personal liability. label Apr 5, 2023
@c21 c21 deleted the fix-zero branch April 5, 2023 21:26
@c21 (Contributor, Author) commented Apr 5, 2023

Merged to master. Verified that the failed tests are unrelated.

ArturNiederfahrenhorst pushed a commit to ArturNiederfahrenhorst/ray that referenced this pull request Apr 10, 2023
…ject#34093)

This fixes the actor pool scale-up logic to take min_workers (the minimum required number of actors) into account.
Previously it was not considered, so `num_total_workers == 0` could occur and lead to `ZeroDivisionError: division by zero`.

Signed-off-by: Cheng Su <[email protected]>
elliottower pushed a commit to elliottower/ray that referenced this pull request Apr 22, 2023
…ject#34093)

Signed-off-by: Cheng Su <[email protected]>
Signed-off-by: elliottower <[email protected]>
ProjectsByJackHe pushed a commit to ProjectsByJackHe/ray that referenced this pull request May 4, 2023
…ject#34093)

Signed-off-by: Cheng Su <[email protected]>
Signed-off-by: Jack He <[email protected]>
Labels
tests-ok The tagger certifies test failures are unrelated and assumes personal liability.
4 participants