
speaker-diarization-3.0 won't load, ONNX error #1814

Closed
sedol1339 opened this issue Dec 15, 2024 · 2 comments

Comments


sedol1339 commented Dec 15, 2024

Tested versions

pyannote-audio=3.3.2

System information

Ubuntu 24.04, CUDA 12.4, CuDNN 9.6.0, onnxruntime-gpu 1.20.1, Nvidia Driver Version 550.120

Issue description

Hello! I've run into a problem: the speaker-diarization-3.0 pipeline doesn't load in my environment (version 3.1 does load):

from pyannote.audio import Pipeline
import torch
import os

# Load the 3.0 pipeline with a Hugging Face token and move it to the GPU
pipeline = Pipeline.from_pretrained(
    'pyannote/speaker-diarization-3.0',
    use_auth_token=os.getenv('HF_TOKEN')
).to(torch.device('cuda'))

Output:

INFO:speechbrain.utils.quirks:Applied quirks (see speechbrain.utils.quirks): [disable_jit_profiling, allow_tf32]
INFO:speechbrain.utils.quirks:Excluded quirks specified by the SB_DISABLE_QUIRKS environment (comma-separated list): []
2024-12-15 14:28:01.224680625 [E:onnxruntime:Default, provider_bridge_ort.cc:1862 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1539 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: /lib/x86_64-linux-gnu/libcudnn_heuristic.so.9: undefined symbol: _ZTVN5cudnn7backend23PagedCacheLoadOperationE, version libcudnn_graph.so.9
2024-12-15 14:28:01.224692208 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:993 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Require cuDNN 9.* and CUDA 12.*. Please install all dependencies as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.

As the error message suggests, I need cuDNN 9.* and CUDA 12.*; however, I already have both installed.
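As a sanity check, something like the following (a minimal sketch, assuming PyTorch is installed with CUDA support) reports which CUDA and cuDNN versions the Python environment actually sees:

import torch

# CUDA / cuDNN versions as reported by PyTorch
print(torch.version.cuda)              # e.g. '12.4'
print(torch.backends.cudnn.version())  # e.g. 90600 for cuDNN 9.6.0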

Does this mean that speaker-diarization-3.0 is outdated and won't work with newer versions of CUDA or pyannote-audio?

Minimal reproduction example (MRE)

What about MRE colab notebook: in this notebook the problem does not occur. However, Colab has CUDA 12.2 which is different. And at the same time Colab does not even have onnxruntime-gpu installed, so ONNX in Colab is installed in some other way i'm not sure about, and don't know how to check the version.
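One way to see what is actually installed there (a sketch, not something I have verified in Colab) is to query the installed onnxruntime distributions and the execution providers they expose:

import importlib.metadata as md
import onnxruntime as ort

# Check which onnxruntime distribution (CPU or GPU) is actually installed
for dist in ("onnxruntime", "onnxruntime-gpu"):
    try:
        print(dist, md.version(dist))
    except md.PackageNotFoundError:
        print(dist, "not installed")

# Runtime version and the providers it can actually offer
print(ort.__version__, ort.get_available_providers())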

@clement-pages
Collaborator

Hey @sedol1339,

As mentioned in pyannote/speaker-diarization-3.1:

This pipeline is the same as pyannote/speaker-diarization-3.0 except it removes the problematic use of onnxruntime.

So using pyannote/speaker-diarization-3.1 should fix your issue with onnxruntime.
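In practice that just means changing the model identifier in your snippet (a sketch of the same call, assuming the rest of your setup stays unchanged):

import os
import torch
from pyannote.audio import Pipeline

# Same loading code as above, but pointing at the 3.1 pipeline,
# which does not depend on onnxruntime
pipeline = Pipeline.from_pretrained(
    'pyannote/speaker-diarization-3.1',
    use_auth_token=os.getenv('HF_TOKEN')
).to(torch.device('cuda'))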

@sedol1339
Author

Ok thank you!
