Improve cross-device usage #2323
That is expected; memory allocated on one device cannot simply be accessed by another. You need unified memory for that, or a mapped host buffer. The fact that explicit copy operations (as used by the I/O stack) still work is unrelated. In fact, you can safely copy between arrays on different devices, which will use an appropriate mechanism (staging through the CPU, or a P2P copy).
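For illustration, a minimal sketch of such an explicit cross-device copy (the device indices are arbitrary and assume at least two visible GPUs):

using CUDA

CUDA.device!(0)
src = CUDA.rand(1024)      # allocated on device 0, the currently active device

CUDA.device!(1)
dst = similar(src)         # allocated on device 1, the now-active device
copyto!(dst, src)          # explicit copy across devices: staged through the CPU, or P2P when available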
I don't want to access memory on one device from another. I just want an operation on one device to give output on the same device, irrespective of the current device. I just want the following behavior from PyTorch:
import torch
device0 = torch.device("cuda:0")
device1 = torch.device("cuda:1")
x0 = torch.zeros(2, device=device0)
x1 = torch.ones(2, device=device1)
print(x0) # tensor([0., 0.], device='cuda:0')
print(x0 - x0) # tensor([0., 0.], device='cuda:0')
print(x1) # tensor([1., 1.], device='cuda:1')
print(x1 - x1) # tensor([0., 0.], device='cuda:1')
Can this be obtained with CUDA.jl?
That is not how our model works. We follow the CUDA programming model, where switching devices is a global operation affecting where the computation happens, whereas in Torch arrays are owned by a device. That is just a different approach, which comes with its own set of trade-offs.
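To make that concrete, here is a minimal sketch of that model (device indices are illustrative): device! is a global switch that determines where subsequent allocations and computations happen, rather than each array redirecting execution to its own device.

using CUDA

CUDA.device!(0)
a = CUDA.ones(2)      # allocated on device 0, the currently active device

CUDA.device!(1)
b = CUDA.ones(2)      # allocated on device 1; the switch affects all subsequent work
c = b - b             # executes on device 1, the active device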
Actually, I think we can improve this by either erroring early, or by using P2P to enable cross-device usage. Note, however, that I still want to keep the semantics that we execute on the currently active device rather than on the device an array was allocated on.
This now works on #2335:
julia> using CUDA
julia> CUDA.device!(0)
CuDevice(0): Tesla V100-PCIE-16GB
julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> CUDA.device!(1)
CuDevice(1): Tesla V100S-PCIE-32GB
julia> x
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> y = x - x;
julia> CUDA.device(x)
CuDevice(0): Tesla V100-PCIE-16GB
julia> CUDA.device(y)
CuDevice(1): Tesla V100S-PCIE-32GB
julia> y
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
0.0
0.0
However, do note that we keep the semantics that we're executing on the globally active device, which means that you may run into the following if your devices are inaccessible to one another:
julia> CUDA.device!(0)
CuDevice(0): Tesla V100-PCIE-16GB
julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> CUDA.device!(3)
CuDevice(3): Tesla P100-PCIE-16GB
julia> x
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
1.0
1.0
julia> y = x - x;
ERROR: ArgumentError: cannot take the GPU address of inaccessible device memory.
You are trying to use memory from GPU 0 while executing on GPU 3.
P2P access between these devices is not possible; either switch execution to GPU 0
by calling `CUDA.device!(0)`, or copy the data to an array allocated on device 3.
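For reference, a hedged sketch of the second option from that error message, copying the data onto the active device before operating on it (the device indices mirror the transcript above):

using CUDA

CUDA.device!(3)
x_local = similar(x)       # allocated on device 3, the active device
copyto!(x_local, x)        # explicit copy from the device-0 array; staged through the CPU if P2P is unavailable
y = x_local - x_local      # now executes entirely on device 3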
I discovered this very surprising behavior where performing operations between arrays on one device produces an array on a different device, and then I also get an error: