Track array ownership to avoid illegal memory accesses #763
Comments
That's a CUDA limitation, nothing we can do about it. File it with NVIDIA instead 😄
Of course, we shouldn't be running into illegal memory accesses at all; CUDA.jl should be as safe to use as possible. In this case, we should probably be tracking which device owns an array.
Yeah, I think just doing a little check on the CUDA.jl side would be pretty useful. I suppose this is already tracked, right? Far from the cleanest, but `findfirst(==(x.ctx), CUDA.__device_contexts)-1` does give you the device id that owns the array.
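For illustration, that lookup could be wrapped in a small helper. This is a hedged sketch only: it relies on the internal `CUDA.__device_contexts` list mentioned above, which is not a public API and may change between CUDA.jl versions.

```julia
# Sketch: map an array's context back to a zero-based device id.
# Assumes `x.ctx` holds the CuContext the array was allocated in, and
# that CUDA.__device_contexts lists contexts in device order (internal API).
function owner_device(x)
    i = findfirst(==(x.ctx), CUDA.__device_contexts)
    i === nothing && error("array's context is not a known device context")
    return i - 1   # device ids are zero-based
end
```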
Note to self: it might be an idea to track the context in the buffer and disallow conversion to a pointer if the current context doesn't match the buffer's.
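A minimal sketch of that idea, using hypothetical `Buffer` and `current_context` names rather than actual CUDA.jl internals:

```julia
# Hypothetical sketch: store the owning context in the buffer and
# refuse pointer conversion when the active context differs.
struct Buffer
    ptr::Ptr{Cvoid}
    ctx::Any   # the context that allocated this buffer (assumed field)
end

function Base.convert(::Type{Ptr{Cvoid}}, buf::Buffer)
    # `current_context()` is assumed to return the task's active context.
    buf.ctx == current_context() ||
        error("buffer was allocated in another context; switch devices first")
    return buf.ptr
end
```

Failing early at the conversion point turns a session-killing illegal memory access into an ordinary, catchable Julia error.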
This is implemented now. |
This is one thing that I think would greatly improve the interactive single-process multi-GPU workflow. Right now, if you accidentally trigger an illegal memory access (say you just forgot that some variable in your session isn't on the GPU you currently have active), it borks the whole session and you have to restart.
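For instance, a session along these lines can hit that state (device numbers are illustrative; the exact failure mode depends on the hardware and driver):

```julia
using CUDA

device!(0)
a = CUDA.rand(1024)   # allocated on device 0

device!(1)
sum(a)                # kernel on device 1 touches device-0 memory;
                      # can raise "an illegal memory access was encountered",
                      # after which the CUDA context is unusable and the
                      # session must be restarted
```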