version libcudnn_ops_infer.so.8 not defined in file libcudnn_ops_infer.so.8 with link time reference #104591
Comments
Based on your
Smoke test:
However, your error message points to the system-wide library install, which should not be used.
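To see whether a system-wide copy of cuDNN is present and could shadow the one inside your Python environment, a quick filesystem check works; this is just a diagnostic sketch (the directories searched are common install locations, not taken from the original report):

```shell
# Look for system-wide cuDNN copies that can shadow the pip/conda-installed one.
# Typical locations are /usr/lib and /usr/local/cuda*/.
find /usr/lib /usr/local/cuda* -name 'libcudnn*' 2>/dev/null || true
echo "search complete"
```

If this prints paths under /usr/local/cuda, the loader may be resolving those instead of the copies bundled with your environment.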
I encountered the same problem. How did you solve it?
Bro, me too. How did you solve it?
You need to install cuDNN 8.
The problem persists even after a conda install.
@aradhyamathur What about https://anaconda.org/conda-forge/cudatoolkit-dev? Does the problem still persist? And have you tried cuDNN version 8?
Having a similar issue. I'm trying to get Faster Whisper to run from a Docker build. I'm trying to use the Docker image: Unfortunately, I'm getting this libcudnn_ops_infer.so.8 issue as well. Does anyone know how I might add the necessary additional libraries? It seems I can't use the official NVIDIA Docker image (it was too large for my smaller system to handle).
In terminal,
My problem is that I'm building and then running this directly in a Google Cloud VM. Do you know if there's any way to do this via my Dockerfile?
@jS5t3r It seems the issue was perhaps coming from conda; I'm not really sure. Creating a new env and installing with pip worked for me.
I updated my Dockerfile like so (I got the LD_LIBRARY_PATH by testing on my VM by running python3 -c 'import os; import nvidia.cublas.lib; import nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))')
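The one-liner above can be wrapped in a slightly more defensive helper. This is a sketch, not from the original thread: `nvidia.cublas` and `nvidia.cudnn` are the pip-installed NVIDIA wheel packages, and the function returns None rather than raising when a package is missing:

```python
import os
import importlib.util

def nvidia_lib_dir(pkg: str):
    """Return the 'lib' directory of a pip-installed NVIDIA package, or None."""
    try:
        spec = importlib.util.find_spec(pkg)
    except ModuleNotFoundError:  # parent package (e.g. 'nvidia') not installed
        return None
    if spec is None or not spec.submodule_search_locations:
        return None  # not installed, or not a package
    return os.path.join(list(spec.submodule_search_locations)[0], "lib")

# Build the value to prepend to LD_LIBRARY_PATH (empty if neither is installed).
dirs = [d for d in (nvidia_lib_dir("nvidia.cublas"), nvidia_lib_dir("nvidia.cudnn")) if d]
print(":".join(dirs))
```

Running this inside the container (or VM) and prepending the printed value to LD_LIBRARY_PATH is the same idea as the one-liner, just without a hard failure when a wheel is absent.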
Could not load library libcudnn_cnn_infer.so.8. Error: /opt/conda/lib/python3.10/site-packages/nvidia/cudnn/lib/libcudnn_cnn_infer.so.8: undefined symbol: _ZN11nvrtcHelper4loadEb, version libcudnn_ops_infer.so.8. Seems like it's close; just one compatibility piece is missing.
Appreciate the suggestion! Unfortunately, this made my container too large for my VM to handle; is there an alternative that doesn't install as much? I'm still confused why the original pytorch/pytorch:2.1.0-cuda11.8-cudnn8-runtime doesn't accomplish what I need.
The correct path to the
I got it working using
I remember that in the early years, both PyTorch and TensorFlow loaded CUDA and cuDNN from the system.
It really works!
Maybe try: export LD_LIBRARY_PATH=/home/zzm/anaconda3/envs/ttskit-new/lib/python3.8/site-packages/nvidia/cudnn/lib:$LD_LIBRARY_PATH
Thank you. Although it didn't work, I tried
This worked for me! Setting the path before $LD_LIBRARY_PATH is crucial.
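The ordering matters because the dynamic loader searches LD_LIBRARY_PATH left to right, so a prepended directory wins over any system-wide copy. A minimal sketch, with a hypothetical path (substitute your environment's actual site-packages location):

```shell
# Hypothetical path; replace with your env's nvidia/cudnn/lib directory.
CUDNN_DIR="/path/to/venv/lib/python3.10/site-packages/nvidia/cudnn/lib"

# Prepend, so this copy is found BEFORE any system-wide libcudnn.
# The ${VAR:+...} form avoids a trailing ':' when LD_LIBRARY_PATH was unset.
export LD_LIBRARY_PATH="${CUDNN_DIR}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

# The first (highest-priority) entry is now the env's copy:
echo "${LD_LIBRARY_PATH%%:*}"
```

Appending instead (putting $LD_LIBRARY_PATH first) would let an older system-wide libcudnn keep shadowing the correct one, which is why the ordering is crucial.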
Issue description
When I load one torch model, it works, but when I load two torch models, it shows:
But I can find the libcudnn_cnn_infer.so.8 file at this path: /usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8
Code example
Error information
When I load just one TTS model, it works.
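One way to reproduce the load failure outside the model code is to ask the dynamic loader for the library directly via ctypes. This is a diagnostic sketch, not part of the original report; which copy of the library wins depends on LD_LIBRARY_PATH ordering and the ldconfig cache:

```python
import ctypes

def can_load(libname: str) -> bool:
    """Return True if the dynamic loader can resolve and load libname."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# Probe the two libraries from the error message.
print("libcudnn_ops_infer.so.8:", can_load("libcudnn_ops_infer.so.8"))
print("libcudnn_cnn_infer.so.8:", can_load("libcudnn_cnn_infer.so.8"))
```

If the first probe succeeds but the second fails with an undefined-symbol or version error, two mismatched cuDNN installs are likely being mixed, which matches the behavior described above.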
System Info
cc @ptrblck