是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
我已经搜索过FAQ | I have searched FAQ
当前行为 | Current Behavior
How much GPU memory does full-parameter fine-tuning need? I am running on 7 × 40 GB GPUs but still hit out-of-memory errors, even after reducing model_max_length to 512. Which other parameters should I change?
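For reference, a minimal sketch of the settings that are usually lowered first for full-parameter fine-tuning, assuming the training script goes through Hugging Face transformers.TrainingArguments; the output path and the DeepSpeed config file name are placeholders, not values taken from this repo:

```python
# Hypothetical memory-saving settings; adapt to the actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./output",              # placeholder path
    per_device_train_batch_size=1,      # smallest possible micro-batch per GPU
    gradient_accumulation_steps=16,     # keep the effective batch size via accumulation
    gradient_checkpointing=True,        # recompute activations instead of storing them
    bf16=True,                          # 16-bit training on A100-class GPUs
    deepspeed="ds_zero3_offload.json",  # hypothetical ZeRO-3 + CPU-offload config; must point to a real file
)
```

With plain data parallelism every rank keeps a full copy of the model weights, gradients, and Adam optimizer states, so for a multi-billion-parameter model 7 × 40 GB is generally only enough when something like DeepSpeed ZeRO-3 (optionally with CPU offload) shards those states across the GPUs; shrinking model_max_length alone only reduces activation memory.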
Error message:
torch.cuda.OutOfMemoryError (raised from self.optimizer.step()): CUDA out of memory. Tried to allocate 4.54 GiB. GPU 6 has a total capacty of 39.39 GiB of which 590.06 MiB is free. Including non-PyTorch memory, this process has 38.81 GiB memory in use. Of the allocated memory 34.27 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The other six ranks raise the same torch.cuda.OutOfMemoryError (each tried to allocate 4.54 GiB, with 2.76 GiB reserved by PyTorch but unallocated); only the per-GPU figures differ:
GPU 0: 596.06 MiB free, 38.81 GiB in use (34.26 GiB allocated by PyTorch)
GPU 1: 478.06 MiB free, 38.92 GiB in use (34.38 GiB allocated by PyTorch)
GPU 2: 542.06 MiB free, 38.86 GiB in use (34.32 GiB allocated by PyTorch)
GPU 3: 510.06 MiB free, 38.89 GiB in use (34.35 GiB allocated by PyTorch)
GPU 4: 430.06 MiB free, 38.97 GiB in use (34.43 GiB allocated by PyTorch)
GPU 5: 542.06 MiB free, 38.86 GiB in use (34.32 GiB allocated by PyTorch)
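The traceback itself only suggests tuning the caching allocator; a sketch of that setting is below. Note that max_split_size_mb only mitigates fragmentation and does not add capacity, so it may not be enough on its own.

```python
# Must be set before torch initializes CUDA (e.g. at the top of the launch script).
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # any CUDA allocation must happen after the variable is set
```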
GPU memory utilization:
期望行为 | Expected Behavior
No response
复现方法 | Steps To Reproduce
No response
运行环境 | Environment
- Python: 3.10
- Transformers: 4.40.0
- PyTorch: 2.1.2
- CUDA: 11.8
备注 | Anything else?
No response