Killed while evaluating #20

Open
ZhangTianrong opened this issue Nov 11, 2020 · 0 comments

@ZhangTianrong

I changed the dataset from R2R to R4R, whose val_unseen split contains over 45k instructions. Training is killed when roughly 15k of them have been evaluated. The machine I am using has 64 GB of memory and a Tesla V100 graphics card, and the batch_size is set to 8. I am not sure what the bottleneck is here. My guess is that the result dictionary is taking up too much memory. Would it be good practice to flush the results to disk and release the memory every ~10k entries?
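
For reference, this is the kind of flushing I have in mind (a minimal sketch only; `fake_predictions`, the output path, and the trajectory format are placeholders rather than this repo's actual code):

```python
import json
import os

# Sketch: accumulate evaluation results in a dict, but flush them to disk and
# clear the dict every ~10k entries so memory stays bounded over a 45k-instruction split.

FLUSH_EVERY = 10_000
OUT_DIR = "snap/eval_parts"  # hypothetical output directory

def flush(results, part_idx):
    """Dump the current results to a JSON part file and free the memory."""
    os.makedirs(OUT_DIR, exist_ok=True)
    part_path = os.path.join(OUT_DIR, f"val_unseen.part{part_idx}.json")
    with open(part_path, "w") as f:
        json.dump(results, f)
    results.clear()

def fake_predictions(n):
    """Stand-in for the agent's per-instruction predictions during evaluation."""
    for i in range(n):
        yield f"instr_{i}", {"trajectory": [["viewpoint_0", 0.0, 0.0]]}

results, part = {}, 0
for instr_id, traj in fake_predictions(45_000):
    results[instr_id] = traj
    if len(results) >= FLUSH_EVERY:
        flush(results, part)
        part += 1
if results:  # write the remaining tail
    flush(results, part)
```

The part files could then be merged (or scored one by one) after the run, instead of keeping all 45k trajectories in memory at once.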
