GPU Inference Problem? LoadLibrary failed with error 126 #11826
Comments
Look at https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html for the list of required DLLs.
Make sure you set the Path environment variable so that those DLLs are in your path. If you are not sure, just re-install the corresponding versions of CUDA 11.4 and cuDNN 8.2.2.26 (for Windows), and set the Path for them.
Is there a way to set the path in Python, dynamically? I am looking to package this with PyInstaller. Thanks!
os.add_dll_directory
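A minimal sketch of doing that with os.add_dll_directory (available on Windows since Python 3.8); the two install paths below are placeholders for wherever CUDA 11.4 and cuDNN actually live on your machine:

```python
import os

# Placeholder paths -- point these at your actual CUDA/cuDNN install dirs.
os.add_dll_directory(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.4\bin")
os.add_dll_directory(r"C:\tools\cudnn-8.2.2.26\bin")

# Register the DLL directories before onnxruntime tries to load
# the CUDA provider (i.e. before creating an InferenceSession).
import onnxruntime as ort
```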
Update: I checked the paths of each virtual environment; they were both the same. Still not working.
I also tried uninstalling onnxruntime-gpu, deleting the virtual environment, and reinstalling it.
Okay, a solution: check whether the CUDA and onnxruntime versions match; if they do not, downgrade onnxruntime.
Had this error. My problem was that the ONNX Runtime version I was using did not support the TensorRT version I was using.
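A quick, non-authoritative way to check which build is active and whether the CUDA provider is even visible, using onnxruntime's public helpers:

```python
import onnxruntime as ort

print(ort.__version__)                # compare against the CUDA/cuDNN support matrix
print(ort.get_device())               # "GPU" for the onnxruntime-gpu build
print(ort.get_available_providers())  # should list "CUDAExecutionProvider"
```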
Describe the bug
Using two different virtual environments, onnxruntime can perform GPU inference in one environment, but not in the second.
The bug or error message:
2022-06-11 19:17:06.5510989 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:\Users\T\Desktop\SynthesisProduction\WESpeechSynthesisProductionEnv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-06-11 19:17:06.5596712 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
2022-06-11 19:17:06.9589542 [E:onnxruntime:Default, provider_bridge_ort.cc:1022 onnxruntime::ProviderLibrary::Get] LoadLibrary failed with error 126 "" when trying to load "C:\Users\T\Desktop\SynthesisProduction\WESpeechSynthesisProductionEnv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
2022-06-11 19:17:06.9665471 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:552 onnxruntime::python::CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/reference/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
The other environment runs GPU inference without issue.
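Note that the "Failed to create CUDAExecutionProvider" message above is only a warning: the session silently falls back to CPU. One way to confirm which providers a session actually ended up with ("model.onnx" below is a placeholder):

```python
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",  # placeholder for the actual model file
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# If onnxruntime_providers_cuda.dll failed to load, only
# "CPUExecutionProvider" will remain in this list.
print(sess.get_providers())
```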
Urgency
None.
System information
OS Platform and Distribution: Windows 10 64bit
ONNX Runtime installed from (source or binary): Binary from Pip
ONNX Runtime version:
onnxruntime==1.11.1
onnxruntime-gpu==1.11.1
Python version: Python 3.8.7
Visual Studio version (if applicable): May 2022 (version 1.68)
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version: N/A ?
GPU model and memory: RTX 3080, 10 GB
To Reproduce
Not quite sure how to reproduce, but is there a way to debug and figure out why the other environment can load this DLL while the current one can't? (See the sketch after the pip command below.)
The environment where GPU execution works installs torch via pip3 for Windows:
pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
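One way to debug the load failure directly is to try loading the provider DLL with ctypes from each environment; error 126 usually means a dependency of the DLL (e.g. the CUDA or cuDNN runtime) is missing from that environment's Path, not the DLL itself. A sketch, using the path from the error message above:

```python
import ctypes

dll = (r"C:\Users\T\Desktop\SynthesisProduction\WESpeechSynthesisProductionEnv"
       r"\lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll")

try:
    ctypes.WinDLL(dll)
    print("loaded OK")
except OSError as exc:
    # Error 126 ("The specified module could not be found") raised here
    # points at a missing dependency of the DLL, not the DLL itself.
    print("load failed:", exc)
```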
Expected behavior
Expect GPU inference to run.