To use DistributedSampler or not? #1541
Question:
In the pytorch-xla documentation (https://pytorch.org/xla/), the use of a distributed sampler is not mentioned. However, the example at https://github.com/pytorch/xla/blob/master/test/test_train_mp_mnist.py says we should be using a distributed sampler. xm.RateTracker() isn't mentioned in the documentation either. Are both correct?
Also, is there a way to use iterable datasets with distributed samplers?
Answer:
You want to use distributed samplers when using the multiprocessing API (or TPU Pod training), since the spawned processes don't share memory. So yes, that example is correct.
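Roughly, the pattern that example follows looks like the sketch below. This is a simplified illustration run on a TPU host, not the file verbatim: the random-tensor dataset, the tiny linear model, and all hyperparameters are stand-ins.

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl
import torch_xla.distributed.xla_multiprocessing as xmp


def _mp_fn(index):
    device = xm.xla_device()
    # Stand-in for a real map-style dataset (e.g. torchvision MNIST).
    dataset = torch.utils.data.TensorDataset(
        torch.randn(10000, 1, 28, 28), torch.randint(0, 10, (10000,)))
    # Each process reads only its own disjoint shard of the dataset.
    sampler = torch.utils.data.distributed.DistributedSampler(
        dataset,
        num_replicas=xm.xrt_world_size(),  # total number of processes
        rank=xm.get_ordinal(),             # this process's shard index
        shuffle=True)
    loader = torch.utils.data.DataLoader(
        dataset, batch_size=128, sampler=sampler)
    model = torch.nn.Sequential(
        torch.nn.Flatten(), torch.nn.Linear(784, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(1, 3):
        sampler.set_epoch(epoch)  # reshuffle differently each epoch
        para_loader = pl.ParallelLoader(loader, [device])
        for data, target in para_loader.per_device_loader(device):
            optimizer.zero_grad()
            loss = loss_fn(model(data), target)
            loss.backward()
            xm.optimizer_step(optimizer)  # all-reduce grads, then step


if __name__ == '__main__':
    xmp.spawn(_mp_fn, nprocs=8)  # e.g. one process per TPU core
```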
And also yes on xm.RateTracker(): it is just a small utility for tracking training throughput, so using it the way the example does is fine.
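A standalone sketch of the RateTracker API, with a sleep standing in for a training step:

```python
import time
import torch_xla.core.xla_model as xm

tracker = xm.RateTracker()
for step in range(100):
    time.sleep(0.01)   # stand-in for a training step
    tracker.add(128)   # samples processed this step
    if step % 20 == 0:
        # rate() is the smoothed recent rate; global_rate() averages
        # over everything since the tracker was created.
        print('step {}: {:.1f} samples/s ({:.1f} global)'.format(
            step, tracker.rate(), tracker.global_rate()))
```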
As for iterable datasets: not sure that would work, since sharding the dataset, which is what the distributed sampler is doing, requires knowing the length of the entire dataset in advance. Check this discussion out: pytorch/pytorch#28743.
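One workaround along the lines of that discussion is to skip the sampler entirely and shard the stream inside the IterableDataset itself, so no global length is needed. A minimal sketch, assuming each process spawned by xmp.spawn builds its own loader (ShardedIterable is a made-up name, not a torch_xla API):

```python
import torch
import torch_xla.core.xla_model as xm


class ShardedIterable(torch.utils.data.IterableDataset):
    """Yield every world_size-th item starting at `rank`, so each
    process sees a disjoint shard without knowing the total length."""

    def __init__(self, iterable_fn, rank, world_size):
        self.iterable_fn = iterable_fn
        self.rank = rank
        self.world_size = world_size

    def __iter__(self):
        for i, item in enumerate(self.iterable_fn()):
            if i % self.world_size == self.rank:
                yield item


# Inside each spawned process:
def make_loader():
    stream = lambda: iter(range(1000))  # stand-in for a real data stream
    ds = ShardedIterable(stream, rank=xm.get_ordinal(),
                         world_size=xm.xrt_world_size())
    return torch.utils.data.DataLoader(ds, batch_size=32)
```

One caveat with manual sharding like this: if the shards end up with different numbers of batches, the cores can end up waiting on each other in the collective ops, so you would want every process to yield the same number of steps.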