[data] Make sure the tf and tensor iteration work in dataset pipeline #34248
Conversation
it = ds.iterator()
for _ in range(2):
By continuing to use DatasetPipeline.iter_batches(), the consumption is not repeatable.
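The non-repeatable consumption can be illustrated with a plain-Python sketch (illustrative names only, not Ray's actual implementation): a generator-backed iter_batches() is exhausted after a single pass.

```python
# Illustrative sketch, not Ray code: a generator-style iter_batches()
# can only be consumed once, so a second iteration silently yields nothing.
def iter_batches(batches):
    for batch in batches:
        yield batch

stream = iter_batches([1, 2, 3])
print(list(stream))  # first pass consumes the generator
print(list(stream))  # second pass is empty: not repeatable
```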
This is changing the actual behavior of PipelineDatasetIterator.
Approved for picking into 2.4 branch
@@ -99,13 +95,13 @@ def test_tf_e2e_pipeline(ray_start_regular_shared):
     ds = ray.data.range_table(5).repeat(2)
     it = ds.iterator()
     model = build_model()
-    model.fit(it.to_tf("value", "value"), epochs=2)
+    model.fit(it.to_tf("value", "value"), epochs=1)
cc @amogkam
Looks like this is changing the actual behavior of PipelineDatasetIterator, which I don't think is what we want. The behavior difference between DatasetPipeline and PipelineDatasetIterator was intentional: for PipelineDatasetIterator, we want each call to iter to return only a single epoch's worth of data, not the entire repeated dataset.
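The per-epoch semantics described above can be sketched in plain Python. EpochIterator here is a hypothetical stand-in, not Ray's PipelineDatasetIterator: each call to iter() returns only the next epoch's worth of data.

```python
# Hypothetical stand-in for the per-epoch behavior discussed above;
# not Ray's actual PipelineDatasetIterator.
class EpochIterator:
    def __init__(self, epochs):
        # epochs: an iterable of per-epoch batch lists, e.g. from .repeat(n)
        self._epochs = iter(epochs)

    def __iter__(self):
        # Each iter() call advances to the next epoch only,
        # rather than replaying the entire repeated dataset.
        return iter(next(self._epochs))

it = EpochIterator([[1, 2, 3], [1, 2, 3]])  # like range(3).repeat(2)
print(list(iter(it)))  # first epoch only
print(list(iter(it)))  # second epoch on the next call
```

Under these semantics, a training loop that builds a fresh iterator per epoch consumes exactly one repeat of the pipeline each time.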
-        # Set prefetch_batches to default of 0 for DatasetPipeline.
-        return super().iter_batches(
-            prefetch_batches=prefetch_batches,
+        yield from self._base_dataset_pipeline.iter_batches(
Isn't this changing the behavior of PipelineDatasetIterator? Not using self._base_dataset_pipeline was intentional, so that each call to iter would iterate over the next epoch. The fix should just be to not directly call …

It doesn't seem this has a fundamental difference from just changing the PipelinedDatasetIterator, since this iterator is only used by the tf/torch APIs called from DatasetPipeline, right?

Ah no, …
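The distinction under discussion can be sketched as follows (illustrative names only, not Ray code): delegating to the base pipeline streams every repeat in one continuous pass, whereas the per-epoch iterator stops at each epoch boundary.

```python
# Illustrative contrast between the two behaviors, not Ray code.
def base_pipeline_iter_batches(epochs):
    # DatasetPipeline-style: one continuous stream over all repeats.
    for epoch in epochs:
        yield from epoch

def per_epoch_iter_batches(epochs_iter):
    # PipelineDatasetIterator-style: one epoch per call.
    yield from next(epochs_iter)

epochs = [[1, 2], [1, 2]]  # two repeats of the same data
print(list(base_pipeline_iter_batches(epochs)))  # all repeats in one pass

it = iter(epochs)
print(list(per_epoch_iter_batches(it)))  # first epoch
print(list(per_epoch_iter_batches(it)))  # second epoch
```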
nice, thanks!
@@ -31,6 +31,13 @@
 from ray.data.dataset import TensorFlowTensorBatchType


 def _is_tensor_dataset(schema) -> bool:
No-op change, just to make to_tf shareable with DatasetPipeline.
Tests passed (the remaining failure is not relevant).
…ray-project#34248)

* Revert "[Datasets] Revert "Enable streaming executor by default (ray-project#32493)" (ray-project#33485)"

  This reverts commit 5c79954.

* make sure tf and tensor iteration in datapipeline work
* Fix
* fix
* fix
* fix
* feedback
* feedback
* fix
Why are these changes needed?
With the new dataset iterator API, iteration over tf and torch data from DatasetPipeline is broken.
See: #33994
Related issue number
Checks

- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.