How should I configure GPU inference using the exe installation package? #86
Comments
Please check the versions of CUDA and cuDNN.
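As a quick way to verify the versions mentioned above, here is a minimal sketch that shells out to the standard NVIDIA command-line tools (`nvcc` and `nvidia-smi`) if they are on PATH; the helper name `tool_version` is mine, not from this thread:

```python
import shutil
import subprocess

def tool_version(cmd, args):
    """Run a version command if the tool is on PATH; return its output or None."""
    if shutil.which(cmd) is None:
        return None
    result = subprocess.run([cmd, *args], capture_output=True, text=True)
    return result.stdout

# nvcc reports the CUDA toolkit version; nvidia-smi reports the driver's CUDA version.
print(tool_version("nvcc", ["--version"]) or "nvcc not found on PATH")
print(tool_version("nvidia-smi", []) or "nvidia-smi not found on PATH")
```

The cuDNN version is not exposed by a CLI tool; on Windows it can be read from the `cudnn_version.h` header or the cuDNN DLL file names in the install directory.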
@vietanhdev
I face the same problem. I initially used CUDA 11.8 and cuDNN 8.9.3. It detected a segment successfully, then the program crashed immediately. I found this issue, installed the recommended versions of CUDA and cuDNN, but still no luck.
@ZJDATY I have the same problem. Did you solve it?
After selecting the model, it reports this error.
My local environment is CUDA 11.7 + cuDNN 8.6 + onnxruntime-gpu 1.14.0. All of the above have been added to the environment variables, and CUDA_PATH is also configured correctly. I have also placed the onnxruntime-gpu 1.14.0 DLL files in the program's working directory.
With the YOLOv8n model there were no errors, but Task Manager shows that the GPU is not being used for inference.
@vietanhdev May I ask what I should do?
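When inference silently falls back to the CPU like this, one useful check is whether ONNX Runtime was actually built with CUDA support. A minimal sketch (the helper name `gpu_provider_status` is mine; `get_available_providers` is a real onnxruntime API, and the check degrades gracefully when the package is not installed):

```python
import importlib.util

def gpu_provider_status():
    """Return ONNX Runtime's available execution providers, or None if it isn't installed."""
    if importlib.util.find_spec("onnxruntime") is None:
        return None
    import onnxruntime as ort
    return ort.get_available_providers()

providers = gpu_provider_status()
if providers is None:
    print("onnxruntime is not installed")
elif "CUDAExecutionProvider" in providers:
    print("GPU inference available:", providers)
else:
    print("CPU-only build; install onnxruntime-gpu:", providers)
```

If `CUDAExecutionProvider` is missing, the CPU-only `onnxruntime` package is likely shadowing `onnxruntime-gpu`, or the CUDA/cuDNN DLLs could not be loaded at import time.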