
[Bug] ONNX not working with CUDA #336

Open
vivi90 opened this issue Aug 15, 2022 · 4 comments

vivi90 commented Aug 15, 2022

Steps to reproduce

conda create -n romp python=3.10
conda activate romp
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
pip install simple-romp cython
romp --mode=webcam --show -t --onnx

Error message

RuntimeError: ... onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "...\onnxruntime_providers_tensorrt.dll"

But the file D:\miniconda3\envs\romp\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll exists. 🤔
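For context: Windows `LoadLibrary` error 126 (`ERROR_MOD_NOT_FOUND`) is also raised when a *dependency* of the named DLL is missing, not just the DLL itself. So `onnxruntime_providers_tensorrt.dll` can exist while the TensorRT runtime DLLs it links against (e.g. `nvinfer.dll`) are not on `PATH`. One way to sidestep this is to filter the provider list before creating the session so the TensorRT provider is never loaded; a minimal sketch (the helper name `select_providers` is hypothetical, not part of ROMP or onnxruntime):

```python
# Hypothetical helper: keep only execution providers that the installed
# onnxruntime wheel reports as available, skip TensorrtExecutionProvider
# entirely, and always keep CPUExecutionProvider as a last-resort fallback.
def select_providers(preferred, available):
    chosen = [p for p in preferred
              if p in available and p != "TensorrtExecutionProvider"]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# What the onnxruntime-gpu 1.12.1 wheel typically reports via
# onnxruntime.get_available_providers():
available = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]
print(select_providers(["CUDAExecutionProvider"], available))
# → ['CUDAExecutionProvider', 'CPUExecutionProvider']
```
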

Full error message

To use onnx model, we need to install the onnxruntime python package. Please install it by youself if failed!
Collecting onnxruntime-gpu
  Downloading onnxruntime_gpu-1.12.1-cp310-cp310-win_amd64.whl (110.8 MB)
     ---------------------------------------- 110.8/110.8 MB 4.2 MB/s eta 0:00:00
Collecting coloredlogs
  Downloading coloredlogs-15.0.1-py2.py3-none-any.whl (46 kB)
     ---------------------------------------- 46.0/46.0 kB 1.2 MB/s eta 0:00:00
Collecting sympy
  Downloading sympy-1.10.1-py3-none-any.whl (6.4 MB)
     ---------------------------------------- 6.4/6.4 MB 8.9 MB/s eta 0:00:00
Collecting protobuf
  Downloading protobuf-4.21.5-cp310-abi3-win_amd64.whl (525 kB)
     ---------------------------------------- 525.5/525.5 kB 5.5 MB/s eta 0:00:00
Collecting flatbuffers
  Using cached flatbuffers-2.0-py2.py3-none-any.whl (26 kB)
Requirement already satisfied: numpy>=1.21.0 in d:\miniconda3\envs\romp\lib\site-packages (from onnxruntime-gpu) (1.23.1)
Requirement already satisfied: packaging in d:\miniconda3\envs\romp\lib\site-packages (from onnxruntime-gpu) (21.3)
Collecting humanfriendly>=9.1
  Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
     ---------------------------------------- 86.8/86.8 kB 981.5 kB/s eta 0:00:00
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in d:\miniconda3\envs\romp\lib\site-packages (from packaging->onnxruntime-gpu) (3.0.9)
Collecting mpmath>=0.19
  Downloading mpmath-1.2.1-py3-none-any.whl (532 kB)
     ---------------------------------------- 532.6/532.6 kB 4.2 MB/s eta 0:00:00
Collecting pyreadline3
  Downloading pyreadline3-3.4.1-py3-none-any.whl (95 kB)
     ---------------------------------------- 95.2/95.2 kB 2.7 MB/s eta 0:00:00
Installing collected packages: pyreadline3, mpmath, flatbuffers, sympy, protobuf, humanfriendly, coloredlogs, onnxruntime-gpu
Successfully installed coloredlogs-15.0.1 flatbuffers-2.0 humanfriendly-10.0 mpmath-1.2.1 onnxruntime-gpu-1.12.1 protobuf-4.21.5 pyreadline3-3.4.1 sympy-1.10.1
creating onnx model
Traceback (most recent call last):
  File "D:\miniconda3\envs\romp\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\miniconda3\envs\romp\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\miniconda3\envs\romp\Scripts\romp.exe\__main__.py", line 7, in <module>
  File "D:\miniconda3\envs\romp\lib\site-packages\romp\main.py", line 177, in main
    romp = ROMP(args)
  File "D:\miniconda3\envs\romp\lib\site-packages\romp\main.py", line 67, in __init__
    self._build_model_()
  File "D:\miniconda3\envs\romp\lib\site-packages\romp\main.py", line 87, in _build_model_
    self.ort_session = onnxruntime.InferenceSession(self.settings.model_onnx_path,\
  File "D:\miniconda3\envs\romp\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "D:\miniconda3\envs\romp\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 395, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1029 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "D:\miniconda3\envs\romp\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"

System

  • OS: Windows 10
  • Cuda:
    ...
    Built on Fri_Dec_17_18:28:54_Pacific_Standard_Time_2021
    Cuda compilation tools, release 11.6, V11.6.55
    Build cuda_11.6.r11.6/compiler.30794723_0
    

vivi90 commented Aug 15, 2022

Using CUDA toolkit 11.6 instead (pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116) did not work either.

@zhanghongyong123456

> *(quotes the original issue report in full)*
Hi, I have the same issue. How did you solve it?


vivi90 commented Oct 9, 2022

@zhanghongyong123456

> Hi, I have the same issue. How did you solve it?

Still not solved, so I'm using CUDA without ONNX for now.

@zhangtaiyu

To fix this error, you need to install TensorRT from https://developer.nvidia.com/tensorrt.
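Alternatively, a sketch of a workaround that avoids installing TensorRT: pass an explicit provider list to `onnxruntime.InferenceSession` so onnxruntime never tries to load `onnxruntime_providers_tensorrt.dll` at all. This assumes onnxruntime-gpu is installed; `build_session` and `model_onnx_path` are illustrative names, not ROMP's actual API.

```python
# Omit TensorrtExecutionProvider from the provider list so onnxruntime
# never loads onnxruntime_providers_tensorrt.dll (and thus never needs
# the TensorRT runtime DLLs). CUDA is tried first; CPU is the fallback.
PROVIDERS = ["CUDAExecutionProvider", "CPUExecutionProvider"]

def build_session(model_onnx_path):
    import onnxruntime  # assumes the onnxruntime-gpu wheel is installed
    return onnxruntime.InferenceSession(model_onnx_path, providers=PROVIDERS)
```
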
