
GPU Inference Problem? LoadLibrary failed with error 126 #11826

Closed
ArEnSc opened this issue Jun 12, 2022 · 9 comments

ArEnSc commented Jun 12, 2022

Describe the bug
Across two different virtual environments, onnxruntime performs GPU inference in one environment.

However, in the second environment it cannot: the CUDA execution provider fails to load.

The bug or error message:
2022-06-11 19:17:06.5510989 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:\Users\T\Desktop\SynthesisProduction\WESpeechSynthesisProductionEnv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-06-11 19:17:06.5596712 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-06-11 19:17:06.9589542 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:\Users\T\Desktop\SynthesisProduction\WESpeechSynthesisProductionEnv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-06-11 19:17:06.9665471 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
The other environment runs the GPU inference without issue.
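
For context, these warnings are emitted at session creation when the CUDA provider is requested; a minimal sketch that triggers the same provider load (the model path is a placeholder, not from this issue):

import onnxruntime as ort

# Requesting CUDAExecutionProvider triggers the LoadLibrary call that fails
# above; onnxruntime then falls back to CPUExecutionProvider.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(sess.get_providers())  # shows which providers were actually created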

Urgency
None.

System information

  • OS Platform and Distribution: Windows 10 64bit

  • ONNX Runtime installed from (source or binary): Binary from Pip

  • ONNX Runtime version:
    onnxruntime==1.11.1
    onnxruntime-gpu==1.11.1

  • Python version: Python 3.8.7

  • Visual Studio version (if applicable): May 2022 (version 1.68)

  • GCC/Compiler version (if compiling from source):

  • CUDA/cuDNN version: N/A ?

  • GPU model and memory: 3080 RTX 10GB

To Reproduce
Not quite sure how to reproduce, but is there a way to debug why one environment can load this DLL while the other cannot?

The environment where GPU execution works installed torch via pip3 for Windows:

pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
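
One hedged way to compare the two environments is to raise onnxruntime's log verbosity so provider load failures print their underlying cause:

import onnxruntime as ort

# Severity 0 = VERBOSE; the log then shows why each provider load fails.
ort.set_default_logger_severity(0)

print(ort.__version__)
print(ort.get_available_providers())  # both environments should list CUDAExecutionProvider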

Expected behavior
Expect GPU inference to run.


tianleiwu commented Jun 13, 2022

Look at https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html for required DLLs like

libcudart 11.4.43
libcufft 10.5.2.100
libcurand 10.2.5.120
libcublasLt 11.6.5.2
libcublas 11.6.5.2
libcudnn 8.2.2.26 (for Windows).

Make sure you set the Path environment variable so that those DLLs are on your path. If you are not sure, just re-install the corresponding versions of CUDA 11.4 and cuDNN 8.2.2.26 (for Windows), and set Path for them.
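
A hedged way to check whether those DLLs resolve from Python; the file names below are the usual CUDA 11.x / cuDNN 8.x names on Windows (assumed, not taken from this thread):

import ctypes

# Adjust the names to match the versions actually installed.
for dll in ("cudart64_110.dll", "cufft64_10.dll", "curand64_10.dll",
            "cublasLt64_11.dll", "cublas64_11.dll", "cudnn64_8.dll"):
    try:
        ctypes.WinDLL(dll)
        print("OK     ", dll)
    except OSError as exc:
        print("FAILED ", dll, exc)  # error 126 means Windows cannot find it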

ArEnSc commented Jun 13, 2022

> Look at https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html for required DLLs like
>
> libcudart 11.4.43
> libcufft 10.5.2.100
> libcurand 10.2.5.120
> libcublasLt 11.6.5.2
> libcublas 11.6.5.2
> libcudnn 8.2.2.26 (for Windows).
>
> Make sure you set the Path environment variable so that those DLLs are on your path. If you are not sure, just re-install the corresponding versions of CUDA 11.4 and cuDNN 8.2.2.26 (for Windows), and set Path for them.

Is there a way to set the path dynamically in Python? I am looking to package this with PyInstaller, thanks!

snnn commented Jun 13, 2022

os.add_dll_directory
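
A minimal usage sketch; the directories are the ones mentioned later in this thread, and on Python 3.8+ the calls must run before onnxruntime is imported:

import os

# Point these at the real CUDA and cuDNN bin folders.
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin")
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-11.4-windows-x64-v8.2.2.26\cuda\bin")

import onnxruntime as ort  # import only after registering the DLL directories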

ArEnSc commented Jun 22, 2022

Some news: I checked the paths of each virtual environment and they were both the same.
The Python versions, however, are different.
The one where CUDA execution works is Python 3.7.9;
the one where it doesn't work, even after calling os.add_dll_directory and pointing it at the directories, is Python 3.8.7.
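
A likely explanation, though not confirmed in this thread: Python 3.8 stopped consulting PATH when resolving DLL dependencies of extension modules, and os.add_dll_directory only affects loads made with the LOAD_LIBRARY_SEARCH_* flags, so a plain LoadLibrary call inside onnxruntime may still miss those directories. Prepending to PATH covers both cases; a combined sketch:

import os

# Illustrative directories, taken from the paths printed below.
cuda_dirs = [
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-11.4-windows-x64-v8.2.2.26\cuda\bin",
]

for d in cuda_dirs:
    if hasattr(os, "add_dll_directory"):  # Python 3.8+ on Windows
        os.add_dll_directory(d)
    # Prepend to PATH as well; the legacy LoadLibrary search order still honors it.
    os.environ["PATH"] = d + os.pathsep + os.environ["PATH"]

import onnxruntime as ort  # import only after the search paths are set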

ArEnSc commented Jun 22, 2022

import os

# os.path.join with a single argument is a no-op; raw strings are enough here.
cuda_path_cudnn = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-11.4-windows-x64-v8.2.2.26\cuda\bin"
print(cuda_path_cudnn)
cuda_11_4 = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin"
print(cuda_11_4)
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\cudnn-11.4-windows-x64-v8.2.2.26\cuda\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin
Server Initializing
2022-06-22 14:55:38.7527020 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:\Users\Tensor\Desktop\WESpeechSynthesisProduction\WESpeechSynthesisProductionEnv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-06-22 14:55:38.7607322 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-06-22 14:55:39.2136338 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:\Users\Tensor\Desktop\WESpeechSynthesisProduction\WESpeechSynthesisProductionEnv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"  
2022-06-22 14:55:39.2225498 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.

@snnn @tianleiwu

not working?

ArEnSc commented Jun 22, 2022

I also tried uninstalling onnxruntime-gpu, deleting the virtual environment, and reinstalling it.

ArEnSc commented Jun 22, 2022

Okay, solution:
os.add_dll_directory doesn't work at all here. I manually added the folder to the path after moving all the missing DLLs into it, and it worked fine.

# Moved all the missing DLLs into this folder.
cuda_path_cudnn = r"C:\Users\Tensor\Desktop\WESpeechSynthesisProduction\src\data"
print(cuda_path_cudnn)

os.environ['PATH'] = cuda_path_cudnn + os.pathsep + os.environ['PATH']  # works
# os.add_dll_directory(cuda_path_cudnn)  # doesn't work
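
Since the goal above was a PyInstaller build, a hedged sketch of applying the same fix inside a frozen app; the "data" folder layout is an assumption, not from this thread:

import os
import sys

# In a one-file PyInstaller bundle, sys._MEIPASS is the unpacked temp folder;
# assume the missing DLLs were bundled into a "data" subfolder.
base = getattr(sys, "_MEIPASS", os.path.dirname(os.path.abspath(__file__)))
dll_dir = os.path.join(base, "data")

os.environ["PATH"] = dll_dir + os.pathsep + os.environ["PATH"]

import onnxruntime as ort  # imported only after PATH is patched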

zengjie617789 commented
Check that your CUDA and onnxruntime versions match; if they don't, downgrade onnxruntime.
In my case, CUDA 11.2 with onnxruntime 1.7 solved this error.
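
A quick way to confirm what is actually installed before downgrading anything:

import onnxruntime as ort

print(ort.__version__)                # compare against the CUDA requirements table
print(ort.get_available_providers())  # CUDAExecutionProvider must be listed
print(ort.get_device())               # "GPU" when the CUDA-enabled build is active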

tomgwasira commented
Had this error. My problem was that the ONNX Runtime version I was using did not support the TensorRT version I was using.
