Learn.py takes all GPU memory #612
Comments
I don't think the problem is with TensorFlow or ML-Agents (when I start training it uses about 20 MB of VRAM). You should check whether your game is doing what you think it does. If you have some kind of memory leak, remember that the game is played 100 times as fast as normal, so that might amplify the problem.
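One way to verify which process is actually holding the VRAM (and confirm a figure like the ~20 MB above), assuming the NVIDIA driver utilities are installed on the machine, is:

```shell
# List each GPU compute process with its memory use, refreshing every second
nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv -l 1
```

If the training process itself shows steadily growing memory here, the leak is on the TensorFlow side; if another process (e.g. the game executable) grows instead, the environment is the culprit.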
Hi @r-lipton, is this using one of our sample environments or your own? Generally, as @Hengoo and @MarcoMeter pointed out, we haven't noticed this in our environments.
I have seen this problem with OpenAI Baselines when invoking a second training run. A workaround is to change the session configuration at trainer_controller.py line 212.
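For context, TensorFlow 1.x maps nearly all GPU memory at session creation by default. A minimal sketch of the kind of session-config change being suggested here, assuming the TF 1.x API that ml-agents used at the time (the exact line in trainer_controller.py may differ across versions):

```python
import tensorflow as tf

# Default behavior: TensorFlow reserves almost the entire GPU at startup.
# allow_growth makes it allocate memory on demand instead.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Alternatively, cap the process at a fixed fraction of GPU memory,
# which makes it possible to share one GPU between several runs:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=config)
```

With `allow_growth` enabled, two training runs can coexist on one GPU as long as their combined working sets fit, which matches the concurrent-runs observation below.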
Update: I tested this today and was able to run multiple training runs concurrently on a single GPU.
It's using my own custom environment.
Hi all. I've made a PR for this, and it will be added to the v0.5 release. #1192
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Hello,
When I run a training session using learn.py, the process allocates all of the GPU's memory. Is there a way to avoid this and make it take only what it needs?
Thanks