Evaluating sparse matrices in the REPL has a huge memory footprint #2016
We just convert the sparse GPU array to a sparse CPU array and use SparseArrays.jl's output methods; there's nothing special about evaluation in the REPL. So if there's surprising memory use, it should be reproducible by just calling these conversion methods. Anyway, I can't reproduce:

```julia
julia> using CUDA, SparseArrays

julia> x = cu(sprand(10000, 10000, 0.01));

julia> CUDA.memory_status()
Effective GPU memory usage: 0.94% (459.625 MiB/47.504 GiB)
Memory pool usage: 7.660 MiB (32.000 MiB reserved)

julia> x
10000×10000 CuSparseMatrixCSC{Float32, Int32} with 998972 stored entries:
...

julia> CUDA.memory_status()
Effective GPU memory usage: 0.94% (459.625 MiB/47.504 GiB)
Memory pool usage: 7.660 MiB (32.000 MiB reserved)
```

Please, when filing bugs, always include an actual reproducer, as suggested by the bug filing template. That template also asks for crucial information, like the CUDA.jl version, Julia version, etc.
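For reference, the display path can be exercised without the REPL by performing the conversion and printing explicitly. A minimal sketch, assuming CUDA.jl provides a `SparseMatrixCSC` constructor for `CuSparseMatrixCSC` (the GPU-to-CPU conversion that printing triggers):

```julia
# Sketch: reproduce what REPL display does for a sparse GPU array.
# Assumes CUDA.jl with a functional GPU; SparseMatrixCSC(x) is assumed
# to be the GPU-to-CPU conversion that display performs.
using CUDA, SparseArrays

x = cu(sprand(10_000, 10_000, 0.01))     # sparse array on the GPU
CUDA.memory_status()                     # baseline GPU memory use
x_cpu = SparseMatrixCSC(x)               # convert to a CPU sparse matrix
show(stdout, MIME"text/plain"(), x_cpu)  # SparseArrays.jl's output method
CUDA.memory_status()                     # compare: any transient spike?
```

If the spike is real, it should show up in the second `memory_status()` call (or while the conversion runs), independent of the REPL.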
Going to close this for lack of response, but feel free to open a new issue with more details if you're still running into this on the latest version.
Here's how to reproduce:
In my case, evaluating a ~12 GB matrix in the REPL caused a ~28 GB spike in GPU memory usage on top of the matrix itself (12 + 28). (Of course, the memory is released once the evaluation is done...)