RuntimeError: CUDA out of memory #11
Hi @pkuzqh, I've got another issue when running the code.
If you want to change the batch size, you need to change the number in the dict `args`. If you want to use multiple GPUs, you need to modify `model = nn.DataParallel(model, device_ids=[0, 1])`.
Hi @pkuzqh, thank you for the reply. The |
How many GPUs do you use? And the batch size?
3, I indicated in the |
You need to change the number `4` in lines 103-106 to a multiple of 3, and the batch size also needs to be a multiple of 3.
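The divisibility requirement above comes from how `nn.DataParallel` works: each batch is split evenly across the devices, so the batch size must be a multiple of the GPU count. The helper below is a hypothetical illustration of that check (the function name and messages are not from the project's code):

```python
# Hypothetical helper illustrating why the batch size must be a multiple
# of the GPU count: nn.DataParallel splits each batch evenly across devices.

def check_batch_size(batch_size: int, num_gpus: int) -> int:
    """Return the per-GPU batch size, or raise if the split is uneven."""
    if num_gpus <= 0:
        raise ValueError("num_gpus must be positive")
    if batch_size % num_gpus != 0:
        raise ValueError(
            f"batch size {batch_size} is not divisible by {num_gpus} GPUs; "
            f"round it to a multiple of {num_gpus}"
        )
    return batch_size // num_gpus

# With 3 GPUs, a batch size of 12 gives 4 examples per device.
print(check_batch_size(12, 3))  # -> 4
```

If the check fails at training time, round the batch size down (or up) to the nearest multiple of the GPU count before constructing the data loader.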
OK, thank you very much @pkuzqh! It can now run. However, I saw in the |
You can use `nn.DataParallel` to use multiple GPUs in `testDefect4J.py`.
Hi there, thank you very much for open-sourcing the work!
I wonder what devices you used for this work. I tried to run the training on a machine with 8 Tesla V100-SXM2-16GB GPUs, but it fails with a CUDA out-of-memory error. Besides, I found the code only utilizes 2 of the GPUs, although I did not restrict it to any devices. I modified the device setting inside `run.py`, but still only 2 GPUs are used. Please kindly suggest. Thank you in advance!
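One likely reason only 2 GPUs are used is the hard-coded `device_ids=[0, 1]` in the `nn.DataParallel` call mentioned earlier in this thread. A sketch of a workaround, assuming the script reads the standard CUDA environment variables and does not pin device IDs elsewhere (the flag values here are illustrative):

```shell
# Expose all 8 GPUs to the process; also widen the DataParallel call in the
# code, e.g. device_ids=list(range(8)) instead of device_ids=[0, 1].
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python run.py
```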