Improve cross-device usage #2323

Closed
CarloLucibello opened this issue Apr 15, 2024 · 5 comments · Fixed by #2335
Labels: cuda array (Stuff about CuArray.), enhancement (New feature or request)

Comments

@CarloLucibello (Contributor):

I discovered some very surprising behavior: performing an operation between arrays on one device produces an array on a different device, and then I also get an error:

julia> using CUDA

julia> CUDA.device!(0)
CuDevice(0): NVIDIA TITAN RTX

julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}:
 1.0
 1.0

julia> CUDA.device!(1)
CuDevice(1): NVIDIA TITAN RTX

julia> x
2-element CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}:
 1.0
 1.0

julia> y = x - x;

julia> CUDA.device(x)
CuDevice(0): NVIDIA TITAN RTX

julia> CUDA.device(y)
CuDevice(1): NVIDIA TITAN RTX

julia> y
2-element CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}:
Error showing value of type CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}:
ERROR: CUDA error: an illegal memory access was encountered (code 700, ERROR_ILLEGAL_ADDRESS)
Stacktrace:
  [1] throw_api_error(res::CUDA.cudaError_enum)
    @ CUDA ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/libcuda.jl:30
  [2] isdone
    @ ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/stream.jl:111 [inlined]
  [3] spinning_synchronization(f::typeof(CUDA.isdone), obj::CuStream)
    @ CUDA ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/synchronization.jl:79
  [4] synchronize(stream::CuStream; blocking::Bool, spin::Bool)
    @ CUDA ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/synchronization.jl:196
  [5] synchronize (repeats 2 times)
    @ ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/synchronization.jl:194 [inlined]
  [6] (::CUDA.var"#1098#1099"{Float32, Vector{Float32}, Int64, CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, Int64, Int64})()
    @ CUDA ~/.julia/packages/CUDA/fGE8R/src/array.jl:606
  [7] #context!#954
    @ ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/state.jl:170 [inlined]
  [8] context!
    @ ~/.julia/packages/CUDA/fGE8R/lib/cudadrv/state.jl:165 [inlined]
  [9] unsafe_copyto!(dest::Vector{Float32}, doffs::Int64, src::CuArray{Float32, 1, CUDA.Mem.DeviceBuffer}, soffs::Int64, n::Int64)
    @ CUDA ~/.julia/packages/CUDA/fGE8R/src/array.jl:602
 [10] copyto!
    @ ~/.julia/packages/CUDA/fGE8R/src/array.jl:555 [inlined]
 [11] copyto!
    @ ~/.julia/packages/CUDA/fGE8R/src/array.jl:559 [inlined]
 [12] copyto_axcheck!
    @ ./abstractarray.jl:1177 [inlined]
 [13] Vector{Float32}(x::CuArray{Float32, 1, CUDA.Mem.DeviceBuffer})
    @ Base ./array.jl:673
 [14] Array
    @ ./boot.jl:500 [inlined]
 [15] convert
    @ ./array.jl:665 [inlined]
 [16] adapt_storage
    @ ~/.julia/packages/GPUArrays/OKkAu/src/host/abstractarray.jl:115 [inlined]
 [17] adapt_structure
    @ ~/.julia/packages/Adapt/7T9au/src/Adapt.jl:57 [inlined]
 [18] adapt
    @ ~/.julia/packages/Adapt/7T9au/src/Adapt.jl:40 [inlined]
 [19] print_array
    @ ~/.julia/packages/GPUArrays/OKkAu/src/host/abstractarray.jl:118 [inlined]
 [20] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, X::CuArray{Float32, 1, CUDA.Mem.DeviceBuffer})
    @ Base ./arrayshow.jl:399
julia> CUDA.versioninfo()
CUDA runtime 12.4, artifact installation
CUDA driver 12.0
NVIDIA driver 525.89.2

CUDA libraries: 
- CUBLAS: 12.4.5
- CURAND: 10.3.5
- CUFFT: 11.2.1
- CUSOLVER: 11.6.1
- CUSPARSE: 12.3.1
- CUPTI: 22.0.0
- NVML: 12.0.0+525.89.2

Julia packages: 
- CUDA: 5.3.0
- CUDA_Driver_jll: 0.8.1+0
- CUDA_Runtime_jll: 0.12.1+0

Toolchain:
- Julia: 1.10.2
- LLVM: 15.0.7

3 devices:
  0: NVIDIA TITAN RTX (sm_75, 13.265 GiB / 24.000 GiB available)
  1: NVIDIA TITAN RTX (sm_75, 23.447 GiB / 24.000 GiB available)
  2: NVIDIA TITAN RTX (sm_75, 23.445 GiB / 24.000 GiB available)
@maleadt (Member) commented Apr 15, 2024:

That is expected; memory allocated on one device cannot be simply accessed by another. You need unified memory for that, or a mapped host buffer.

The fact that explicit copy operations (as used by the I/O stack) still work is unrelated. In fact, you can safely copy between arrays on different devices; an appropriate mechanism will be used (staging through the CPU, or a P2P copy).
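
As a minimal sketch of that explicit-copy path (device ordinals and sizes are illustrative, and this assumes at least two visible GPUs):

using CUDA

CUDA.device!(0)
x = CUDA.ones(2)       # allocated on device 0

CUDA.device!(1)
y = CUDA.zeros(2)      # allocated on device 1

# An explicit copy between arrays on different devices is safe;
# CUDA.jl picks an appropriate mechanism (staging through the CPU, or P2P).
copyto!(y, x)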

maleadt closed this as not planned on Apr 15, 2024.
@CarloLucibello (Contributor, Author) commented:

I don't want to access memory on one device from another. I just want an operation on arrays from one device to produce its output on that same device, irrespective of the currently active device. In other words, I want the following PyTorch behavior:

import torch

device0 = torch.device("cuda:0")
device1 = torch.device("cuda:1")
x0 = torch.zeros(2, device=device0)
x1 = torch.ones(2, device=device1)

print(x0)      # tensor([0., 0.], device='cuda:0')
print(x0 - x0) # tensor([0., 0.], device='cuda:0')

print(x1)      # tensor([1., 1.], device='cuda:1')
print(x1 - x1) # tensor([0., 0.], device='cuda:1')

Can this be obtained with CUDA.jl?

@maleadt (Member) commented Apr 17, 2024:

That is not how our model works. We follow the CUDA programming model, where switching devices is a global operation that affects where computation happens, whereas in Torch arrays are owned by a device. That is just a different approach, which comes with its own set of trade-offs.
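
A small sketch of what that implies in practice, using only calls shown elsewhere in this thread (device ordinals are illustrative): computation follows the globally active device, so the Torch-like outcome is obtained by moving the global switch, not by tagging arrays with a device.

using CUDA

CUDA.device!(0)
x0 = CUDA.ones(2)              # owned by device 0

CUDA.device!(1)
x1 = CUDA.ones(2)              # owned by device 1

y1 = x1 .- x1                  # executes on the active device (device 1)

# To get the PyTorch-style result for x0, switch back to its device first:
CUDA.device!(CUDA.device(x0))
y0 = x0 .- x0                  # executes, and is allocated, on device 0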

@maleadt (Member) commented Apr 21, 2024:

Actually, I think we can improve this by either erroring out early or by using P2P to enable cross-device usage. Note, however, that I still want to keep the semantics that we execute on the currently active device rather than on the device an array was allocated on.

maleadt reopened this on Apr 21, 2024.
maleadt changed the title from "error when switching device" to "Improve cross-device usage" on Apr 21, 2024.
@maleadt (Member) commented Apr 22, 2024:

This now works on #2335:

julia> using CUDA

julia> CUDA.device!(0)
CuDevice(0): Tesla V100-PCIE-16GB

julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
 1.0
 1.0

julia> CUDA.device!(1)
CuDevice(1): Tesla V100S-PCIE-32GB

julia> x
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
 1.0
 1.0

julia> y = x - x;

julia> CUDA.device(x)
CuDevice(0): Tesla V100-PCIE-16GB

julia> CUDA.device(y)
CuDevice(1): Tesla V100S-PCIE-32GB

julia> y
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
 0.0
 0.0

However, do note that we keep the semantics of executing on the globally active device, which means that you may run into the following if your devices are not accessible to one another:

julia> CUDA.device!(0)
CuDevice(0): Tesla V100-PCIE-16GB

julia> x = CUDA.ones(2)
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
 1.0
 1.0

julia> CUDA.device!(3)
CuDevice(3): Tesla P100-PCIE-16GB

julia> x
2-element CuArray{Float32, 1, CUDA.DeviceMemory}:
 1.0
 1.0

julia> y = x - x;
ERROR: ArgumentError: cannot take the GPU address of inaccessible device memory.

You are trying to use memory from GPU 0 while executing on GPU 3.
P2P access between these devices is not possible; either switch execution to GPU 0
by calling `CUDA.device!(0)`, or copy the data to an array allocated on device 3.
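
Spelled out as code, the two workarounds suggested by that error message look roughly like this (a sketch; device ordinals follow the example above):

# Option 1: switch execution back to the device that owns x.
CUDA.device!(0)
y = x - x                                # runs on device 0

# Option 2: copy x into an array allocated on the active device first.
CUDA.device!(3)
x3 = CuArray{Float32}(undef, size(x))    # allocated on device 3
copyto!(x3, x)                           # explicit cross-device copy
y = x3 - x3                              # runs on device 3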

maleadt added the enhancement and cuda array labels on Apr 22, 2024.