Reducing download size #106

Open
marionbartl opened this issue Feb 17, 2023 · 0 comments

Hi! I would like to create a subset of the Pile that is ~5 GB in size. The final subset should follow the original distribution of the component datasets, and the included documents should be randomly sampled from them.

I tried to work with the --limit, --read_amount, and --make_dataset_samples parameters to reduce the download size, but when I run the script, each dataset is still downloaded at its original size.

I would greatly appreciate it if you could tell me whether what I'm looking for is achievable with this repo and what the command for that would be.

Thanks!
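
For context, this is roughly the kind of subsampling I have in mind, sketched outside this repo's downloader. It assumes the data is already on disk as the usual zstd-compressed jsonl shards and that the uncompressed total is about 825 GiB; the paths and total size below are placeholders, not anything this repo provides. Keeping each document with probability target_size / total_size preserves each component dataset's share in expectation.

```python
# Minimal sketch (not a feature of this repo): downsample Pile-style
# .jsonl.zst shards to roughly a target size while keeping the original
# per-dataset mix. Paths and the 825 GiB total are assumptions.
import glob
import io
import json
import random

import zstandard as zstd  # pip install zstandard

TARGET_BYTES = 5 * 1024**3    # ~5 GiB subset
TOTAL_BYTES = 825 * 1024**3   # assumed uncompressed size of the full Pile
KEEP_PROB = TARGET_BYTES / TOTAL_BYTES

random.seed(0)

with open("pile_subset.jsonl", "w", encoding="utf-8") as out:
    for path in glob.glob("pile/train/*.jsonl.zst"):  # assumed shard layout
        with open(path, "rb") as fh:
            reader = zstd.ZstdDecompressor().stream_reader(fh)
            for line in io.TextIOWrapper(reader, encoding="utf-8"):
                # Each line is one document; uniform random sampling keeps
                # the relative sizes of the component datasets unchanged.
                if random.random() < KEEP_PROB:
                    out.write(line)
```

But that still requires downloading everything first, which is what I'm hoping to avoid.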
