
Out of memory #17

Open
josvanr opened this issue Apr 1, 2022 · 1 comment
josvanr commented Apr 1, 2022

Hello!

I'm trying to run your script but I'm running into memory problems, and I'm not sure how to tackle this. I tried a smaller sample image, but even at 100x50 I still run out of memory, so perhaps it isn't related to the image size. The error messages are quoted below, and the numbers in them don't seem to correspond to the image size either. Watching GPU memory usage while the script runs, it starts at roughly zero, climbs to the 2048 MB maximum, and then the script exits.

Any suggestions are welcome!

thnx.

```
Test Data Num: 1
Load: BiFuse_Pretrained.pkl
Traceback (most recent call last):
  File "main.py", line 115, in <module>
    main()
  File "main.py", line 111, in main
    saver.LoadLatestModel(model, None)
  File "/sda1/bifuse/BiFuse/Utils/ModelSaver.py", line 33, in LoadLatestModel
    params = torch.load(name)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 608, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 787, in _legacy_load
    result = unpickler.load()
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 743, in persistent_load
    deserialized_objects[root_key] = restore_location(obj, location)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 175, in default_restore_location
    result = fn(storage, location)
  File "/home/jos/.local/lib/python3.6/site-packages/torch/serialization.py", line 155, in _cuda_deserialize
    return storage_type(obj.size())
  File "/home/jos/.local/lib/python3.6/site-packages/torch/cuda/__init__.py", line 606, in _lazy_new
    return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 1.96 GiB total capacity; 1.25 GiB already allocated; 15.56 MiB free; 1.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

josvanr commented Apr 1, 2022

Following a suggestion from an earlier thread, I did get it to work without the GPU (CPU only)...
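For anyone hitting the same trace: the OOM happens inside `torch.load`, because the checkpoint's tensors are deserialized straight onto the GPU. A minimal sketch of the CPU-only workaround (not the project's actual code; the dummy checkpoint here is just for illustration) is to pass `map_location="cpu"` so deserialization never touches GPU memory:

```python
import torch

# Hypothetical stand-in for a pretrained checkpoint: a small state dict
# saved to disk, then reloaded. In the real script this would be
# BiFuse_Pretrained.pkl loaded inside ModelSaver.LoadLatestModel.
ckpt = {"weight": torch.zeros(2, 2)}
torch.save(ckpt, "demo_ckpt.pkl")

# map_location="cpu" remaps every storage in the checkpoint to the CPU,
# so no CUDA memory is allocated during loading.
params = torch.load("demo_ckpt.pkl", map_location="cpu")
print(params["weight"].device)  # cpu
```

After loading on the CPU, the model can still be moved to the GPU later with `.to("cuda")` if enough memory is free, though on a ~2 GB card this checkpoint apparently does not fit.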
