System information
Ubuntu 24.04, CUDA 12.4, cuDNN 9.6.0, onnxruntime-gpu 1.20.1, NVIDIA driver 550.120
Issue description
Hello! I have run into a problem: the speaker-diarization-3.0 pipeline does not load in my environment (the 3.1 version does load):
from pyannote.audio import Pipeline
import torch
import os
pipeline = Pipeline.from_pretrained(
'pyannote/speaker-diarization-3.0',
use_auth_token=os.getenv('HF_TOKEN')
).to(torch.device('cuda'))
Output:
INFO:speechbrain.utils.quirks:Applied quirks (see speechbrain.utils.quirks): [disable_jit_profiling, allow_tf32]
INFO:speechbrain.utils.quirks:Excluded quirks specified by the SB_DISABLE_QUIRKS environment (comma-separated list): []
2024-12-15 14:28:01.224680625 [E:onnxruntime:Default, provider_bridge_ort.cc:1862 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1539 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: /lib/x86_64-linux-gnu/libcudnn_heuristic.so.9: undefined symbol: _ZTVN5cudnn7backend23PagedCacheLoadOperationE, version libcudnn_graph.so.9
2024-12-15 14:28:01.224692208 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:993 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
As the error message suggests, I need cuDNN 9.* and CUDA 12.*; however, I already have both installed.
Does this mean that speaker-diarization-3.0 is outdated and won't work with newer versions of CUDA or pyannote.audio?
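One common cause of this kind of undefined-symbol failure (not confirmed here, just a hypothesis) is that the dynamic loader resolves libcudnn_heuristic.so.9 and libcudnn_graph.so.9 from two different cuDNN installations, for example the system packages under /lib/x86_64-linux-gnu mixed with a pip-installed nvidia-cudnn-cu12 wheel. A minimal, stdlib-only sketch to see which copies the loader would pick:

```python
import ctypes.util

# Ask the loader (via ldconfig on Linux) which cuDNN libraries it resolves.
# If the reported paths/sonames come from different installs, a version
# mismatch between cuDNN components could explain the undefined symbol.
for lib in ("cudnn", "cudnn_graph", "cudnn_heuristic"):
    resolved = ctypes.util.find_library(lib)  # None if not found
    print(f"{lib!r} -> {resolved}")
```

On a machine without cuDNN this simply prints None for each entry; on the affected machine, comparing the resolved names against the pip-installed wheel contents (if any) may show where the conflicting copies live.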
Minimal reproduction example (MRE)
As for an MRE Colab notebook: in this notebook the problem does not occur. However, Colab ships CUDA 12.2, which is a different version. At the same time, Colab does not even have onnxruntime-gpu installed, so ONNX Runtime is pulled in some other way I'm not sure about, and I don't know how to check its version.
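To check how ONNX Runtime gets into the Colab environment, a sketch like the following could be run in a notebook cell (it assumes nothing about how the package was installed and degrades gracefully if it is absent):

```python
import importlib.util

# Report whether an onnxruntime package is importable at all, and if so,
# which build it is and which execution providers it offers.
spec = importlib.util.find_spec("onnxruntime")
if spec is None:
    print("onnxruntime is not installed as a Python package")
else:
    import onnxruntime as ort
    print("onnxruntime", ort.__version__, "from", spec.origin)
    # A GPU-capable build lists "CUDAExecutionProvider" here.
    print("available providers:", ort.get_available_providers())
```

Comparing the reported version and provider list between Colab and the failing Ubuntu machine would show whether the two environments are actually running the same ONNX Runtime build.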
Tested versions
pyannote-audio 3.3.2