04-GLM-4-9B-Chat vLLM deployment: GPU out-of-memory issue #162
Comments
Running the steps in order — is the configuration for the failing step in the final evaluation script, or was something left unclear?

See the referenced issue.

Just turn this value down a bit: set gpu_memory_utilization to 0.60 or smaller.
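A minimal sketch of the suggested change, assuming the offline `LLM` entry point from the tutorial; the model path below is a placeholder, not the repo's actual layout:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="/root/autodl-tmp/glm-4-9b-chat",  # hypothetical local path; use your own download location
    trust_remote_code=True,        # GLM-4 ships custom modeling code
    dtype="half",                  # T4 (compute capability 7.5) has no bfloat16 support
    gpu_memory_utilization=0.6,    # the value the reply suggests lowering
)

outputs = llm.generate(["你好"], SamplingParams(temperature=0.8, max_tokens=64))
print(outputs[0].outputs[0].text)
```

Lowering gpu_memory_utilization shrinks the fraction of VRAM vLLM pre-allocates for weights plus KV cache, leaving headroom for the non-PyTorch memory the OOM message mentions.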
Original issue description:

1. On a server with four T4 cards I tried to run vllm_demo. It first told me the T4 does not support bfloat16 precision: ValueError: Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla T4 GPU has compute capability 7.5. You can use float16 instead by explicitly setting the dtype flag in CLI, for example: --dtype=half.
2. The first three steps (fastapi, streamlit, langchain) all run fine when I follow the tutorial, which is what makes this strange.
3. I then switched the precision to "float16", and now it fails with: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 214.00 MiB. GPU 0 has a total capacty of 14.58 GiB of which 127.56 MiB is free. Including non-PyTorch memory, this process has 14.45 GiB memory in use. Of the allocated memory 14.19 GiB is allocated by PyTorch, and 1.15 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
4. Changing max_model_len, max_tokens, and similar settings made no difference (see the sketch after this list).
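If the demo runs on a single card, note that 9B parameters in float16 are roughly 18 GiB of weights alone — more than one 14.58 GiB T4 — so shrinking the KV cache via max_model_len may not be enough by itself. A hedged sketch (same hypothetical path as above) that instead shards the model across all four T4s with vLLM's tensor parallelism:

```python
from vllm import LLM

llm = LLM(
    model="/root/autodl-tmp/glm-4-9b-chat",  # hypothetical path
    trust_remote_code=True,
    dtype="half",            # sidesteps the bfloat16 ValueError on compute capability 7.5
    tensor_parallel_size=4,  # shard the weights across the four T4 cards
    max_model_len=4096,      # cap the KV cache instead of the model's long default context
)
```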