
Allow arrays larger than 4GB on GPUs #1638

Closed
BA8F0D39 opened this issue Apr 25, 2023 · 13 comments
Labels: enhancement (A feature or an optimization request)

Comments

@BA8F0D39

Summary

Allocating an array larger than 4 GB on an Intel Arc A770 16GB crashes or gives garbage results.

Allocating an array larger than 4 GB on an Intel CPU works fine.

Version

Collecting environment information...
PyTorch version: 1.13.0a0+gitb1dde16
PyTorch CXX11 ABI: Yes
IPEX version: 1.13.10+xpu
IPEX commit: 7d85b0e92
Build type: Release

OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: N/A
IGC version: N/A
CMake version: N/A
Libc version: glibc-2.35

Python version: 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0] (64-bit runtime)
Python platform: Linux-6.3.0-1-x86_64-with-glibc2.35
Is XPU available: True
DPCPP runtime version: N/A
MKL version: N/A
GPU models and configuration: 
[0] _DeviceProperties(name='Intel(R) Graphics [0x56a0]', platform_name='Intel(R) Level-Zero', dev_type='gpu, support_fp64=0, total_memory=15473MB, max_compute_units=512)
Intel OpenCL ICD version: 22.43.24595.35+i538~22.04
Level Zero version: 1.3.24595.35+i538~22.04

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   46 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          20
On-line CPU(s) list:             0-19
Vendor ID:                       GenuineIntel
BIOS Vendor ID:                  Intel(R) Corporation
Model name:                      13th Gen Intel(R) Core(TM) i5-13600K
BIOS Model name:                 13th Gen Intel(R) Core(TM) i5-13600K
CPU family:                      6
Model:                           183
Thread(s) per core:              2
Core(s) per socket:              14
Socket(s):                       1
Stepping:                        1
CPU max MHz:                     5100.0000
CPU min MHz:                     800.0000
BogoMIPS:                        6991.00
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr ibt flush_l1d arch_capabilities
Virtualization:                  VT-x
L1d cache:                       544 KiB (14 instances)
L1i cache:                       704 KiB (14 instances)
L2 cache:                        20 MiB (8 instances)
L3 cache:                        24 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-19
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] intel-extension-for-pytorch==1.13.10+xpu
[pip3] numpy==1.24.1
[pip3] torch==1.13.0a0+gitb1dde16
[pip3] torchvision==0.14.1a0+0504df5
[conda] N/A

Expected behavior

Example of allocating less than 4 GB on the A770 16GB. The mean is around 0.5, which is expected.

import torch
import torchvision.models as models

import numpy as np
import intel_extension_for_pytorch as ipex

torch.manual_seed(0)

# 30000 x 30000 float32 = ~3.6 GB, under the 4 GB limit
x = torch.rand(30000, 30000, dtype=torch.float32, device='xpu')

print("Mean")
print(torch.mean(x).detach().cpu().numpy())


python3 ./test.py 
 Failed to load image Python extension: 
  warn(f"Failed to load image Python extension: {e}")
Mean
0.50001085

Example of allocating more than 4 GB on the CPU. The mean is around 0.5, which is expected.

import torch
import torchvision.models as models

import numpy as np
import intel_extension_for_pytorch as ipex

torch.manual_seed(0)

# 47000 x 47000 float32 = ~8.8 GB, over the 4 GB limit, but on the CPU
x = torch.rand(47000, 47000, dtype=torch.float32, device='cpu')

print("Mean")
print(torch.mean(x).detach().cpu().numpy())



python3 ./test.py 
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 
  warn(f"Failed to load image Python extension: {e}")
Mean
0.4999941

Example of allocating more than 4 GB on the A770 16GB. The mean is around 0.014, which is completely wrong.

import torch
import torchvision.models as models

import numpy as np
import intel_extension_for_pytorch as ipex

torch.manual_seed(0)

# 47000 x 47000 float32 = ~8.8 GB, over the 4 GB limit, on the GPU
x = torch.rand(47000, 47000, dtype=torch.float32, device='xpu')

print("Mean")
print(torch.mean(x).detach().cpu().numpy())


python3 ./test.py 
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 
  warn(f"Failed to load image Python extension: {e}")
Mean
0.014004011

In conclusion, allocating more than 4 GB on the GPU crashes or returns complete garbage.

@BA8F0D39 BA8F0D39 added the sighting Suspicious library behavior. Should be promoted to a bug when confirmed label Apr 25, 2023
@vpirogov (Member)

The issue as reported is specific to Intel Extension for PyTorch and is tracked in intel/intel-extension-for-pytorch#325.

Let me check, though, whether oneDNN has any issues with buffers over 4 GB.

Relevant links:

@igorsafo (Contributor)

@vpirogov There was a commit that enabled this feature, but we reverted it due to multiple issues:

@vpirogov
Copy link
Member

Looking at the user guide, there's a bit more required than just adding an OpenCL flag. Hopefully the driver stack has evolved since the last time we tried this.

@rjoursler (Contributor)

@vpirogov, @igorsafo It looks like further work is also required in some of the OpenCL implementations. A number of them use the int type for offset calculations, since some GPUs have no native int64 arithmetic. We will need to correct those calculations.
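
To illustrate that failure mode (a hypothetical NumPy sketch, not oneDNN code): a byte offset that no longer fits in 32 bits wraps around, so a kernel using it would read from the wrong location in the buffer.

import numpy as np

# The 47000 x 47000 float32 tensor from the report is ~8.8 GB.
n = 47000
last_elem = np.int64(n) * n - 1            # linear index of the last element
byte_offset64 = last_elem * 4              # correct 64-bit byte offset
byte_offset32 = np.uint32(byte_offset64 % (1 << 32))  # simulated 32-bit wraparound

print(byte_offset64)   # 8835999996 -- needs 34 bits
print(byte_offset32)   # 246065404  -- points into the wrong part of the buffer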

@vpirogov vpirogov added enhancement A feature or an optimization request and removed sighting Suspicious library behavior. Should be promoted to a bug when confirmed labels Apr 26, 2023
@BA8F0D39 (Author) commented Apr 26, 2023

Is it possible to add a new environment variable to oneDNN to enable -cl-intel-greater-than-4GB-buffer-required?

Also, is allocation done through OpenCL or Level Zero? Level Zero also has flags you have to enable.

@vpirogov (Member) commented Apr 26, 2023

@BA8F0D39, adding the OpenCL flag alone will not solve the problem, as OpenCL is only part of the oneDNN codebase. The main programming model is SYCL, and allocations are done via the SYCL API. I still need to find out what the story is there.

Also, @rjoursler pointed out that there may be other issues related to buffers over 4 GB.

@gujinghui

@BA8F0D39, please refer to the comments in intel/intel-extension-for-pytorch#325.

@vpirogov vpirogov self-assigned this Apr 28, 2023
@simonlui commented Aug 23, 2023

@vpirogov I was affected by the same issue, so I did some digging (and it took a long time), but I think I have the gist of the story. The short answer is no: there is currently no way to pass the flags mentioned here from SYCL down to Level Zero or OpenCL, so anything from Intel that uses only SYCL, rather than Level Zero or OpenCL directly, can't enable them. But it does seem theoretically possible. Say you want to use malloc in oneDNN for a graph. That malloc seems to be at oneDNN/src/graph/utils/allocator.hpp and uses sycl::aligned_alloc_shared. The Khronos SYCL documentation specifies a property_list parameter for this function, so we can trace what happens to it on the way down.

The chain goes like this: malloc() -> llvm/sycl/source/detail/usm/usm_impl.cpp, from aligned_alloc_shared() to alignedAlloc() to alignedAllocInternal() -> llvm/sycl/plugins/unified_runtime/pi2ur.hpp at piextUSMSharedAlloc() -> llvm/sycl/plugins/unified_runtime/ur/adapters/level_zero/usm.cpp, from urUSMSharedAlloc() to, finally, USMSharedAllocImpl(). The last two calls in the chain depend on the backend; I chose to follow the Level Zero backend. The property list is read in urUSMSharedAlloc(), and USMSharedAllocImpl() does make use of it afterwards; in this example you can see a read-only flag parameter being handled in exactly this fashion. So propagating a flag should be possible.

However, the issue is that there is no provision for passing any of the over-4GB flags, so there is currently no way to get the over-4GB behavior through SYCL today. The same applies to other calls such as the non-CPU-specific sycl::aligned_alloc_device (used for GPUs, FPGAs, etc.), which does the same thing through the equivalent OpenCL backend; that is what affects Intel Extension for PyTorch, which is where this issue hits me as well. This seems to be the core problem.

I'm not sure what standardization or changes would be needed for an over-4GB property to survive and get passed down this chain of calls, and it seems like a much bigger issue than the scope of this project, unfortunately. I will open a corresponding bug report in Intel's LLVM repository, where I think the change needs to happen first and foremost. Getting a fix to propagate everywhere will probably take a while, and I understand why it will be difficult to prioritize, but I hope that when the downstream changes land, the corresponding changes can be made in oneDNN.

@vpirogov (Member)

Thanks for sharing, @simonlui. From our investigation it looks like the issue is the lack of hardware support for 64-bit integer arithmetic in the current generation of GPUs. This makes working with buffers exceeding 4 GB impractical from a performance perspective and complicated from a software implementation perspective.

@simonlui

Hi @vpirogov, just to confirm: this limit applies to a single allocation, not to memory usage in aggregate, correct? The restriction is unfortunate to hear about, but then it wouldn't make much sense for a way to opt out to be offered at all. Is it more that it's not practical to implement inside the compute runtime/driver?

@vpirogov (Member)

Right, the 4 GB limit applies only to a single buffer. You can still use all the memory on the GPU as long as no single allocation exceeds 4 GB. The 'opt-out' is available in the driver and the OpenCL compiler, but it comes with a non-trivial performance impact and non-production-quality status. Additionally, oneDNN has its own code generator for performance-critical functions (like matmul or convolution), which does not have int64 address-math emulation.
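
To make the per-buffer distinction concrete, here is a minimal sketch (hypothetical code, assuming an XPU with enough free memory): several sub-4 GB tensors can coexist and together exceed 4 GB, even though a single 8 GB allocation fails.

import torch
import intel_extension_for_pytorch as ipex

# Sketch: aggregate GPU memory use above 4 GB is fine as long as each
# individual allocation stays under the 4 GB per-buffer limit.
chunks = [torch.rand(30000, 30000, dtype=torch.float32, device='xpu')  # ~3.6 GB each
          for _ in range(3)]                                           # ~10.8 GB total
print(sum(c.mean().item() for c in chunks) / len(chunks))              # ~0.5, as expected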

@BA8F0D39 (Author) commented Aug 31, 2023

@vpirogov
Why is int64 addressing required on GPUs?
For example, to take the mean of an 8 GB array, can't oneDNN split it into two 4 GB arrays and take the mean of each individually? Most machine learning programmers already split their arrays across multiple GPUs for large datasets, so the GPU doesn't necessarily require int64 addressing; only the CPU does. Precision does not matter on the A770 because it only supports fp32 and int32 at most.
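
A user-level version of that idea looks something like the sketch below (hypothetical code, not something oneDNN currently does): compute the mean chunk by chunk so that no single GPU allocation exceeds 4 GB, then combine the partial results.

import torch
import intel_extension_for_pytorch as ipex

# Sketch of a chunked mean: the 47000 x 47000 case split into four
# 47000 x 11750 float32 chunks of ~2.2 GB each, so no buffer exceeds 4 GB.
torch.manual_seed(0)
total, count = 0.0, 0
for _ in range(4):
    chunk = torch.rand(47000, 11750, dtype=torch.float32, device='xpu')
    total += chunk.sum().item()
    count += chunk.numel()
    del chunk  # free this chunk before allocating the next one

print("Mean")
print(total / count)  # ~0.5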

@uniartisan

Given that IPEX issue #325 is still under discussion, this issue should not be closed prematurely. Nowadays, some large language models already have parameter sizes exceeding 4 GB. There are cases where a model or an individual sequence is larger than 4 GB, making it impossible to split it into parts smaller than 4 GB and recombine them on the GPU. This point has also been mentioned in other relevant discussions, so implementing a unified memory abstraction or a unified shared-memory address space in software may be necessary.
