# pipeline.to(torch.device("cuda")) not working on T4 Tesla GPU (pyannote==3.0.0) #1475
Comments
Thank you for your issue. You might want to check the FAQ if you haven't done so already. Feel free to close this issue if you found an answer in the FAQ. If your issue is a feature request, please read this first and update your request accordingly, if needed. If your issue is a bug report, please provide a minimum reproducible example as a link to a self-contained Google Colab notebook containing everything needed to reproduce the bug.
Providing an MRE will increase your chance of getting an answer from the community (either maintainers or other power users). Companies relying on …
I am facing the same issue. If I understand correctly, this is due to the speaker embedding model running with ONNX (credits to #1476). But I am not sure that reverting back to …
Switching back to …
Could you try this and let me know if it allows running on GPU on your side? `pip install https://github.com/pyannote/pyannote-audio/archive/refs/heads/fix/onnxruntime-gpu.zip` All it does is switch from `onnxruntime` to `onnxruntime-gpu`.
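To check whether the GPU-enabled build is actually picked up, here is a quick sanity check (a sketch; `get_available_providers` is part of the public onnxruntime API):

```python
import onnxruntime as ort

# The CPU-only `onnxruntime` package exposes only the CPU provider; with
# `onnxruntime-gpu` the list should also contain 'CUDAExecutionProvider',
# which is what the embedding model needs in order to run on the T4.
print(ort.get_available_providers())
```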
Just to clarify, I only switched to the older embedding model for diagnostics, to check whether the problem was the newer model.
I notice that …
Hey @gau-nernst, I opened a related issue here: #1481. Please continue the discussion there.
FYI: #1537
The latest version no longer relies on ONNX Runtime.
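For reference, a minimal sketch of GPU usage once the ONNX dependency is gone (this assumes pyannote.audio >= 3.1 and the `pyannote/speaker-diarization-3.1` checkpoint; the token and audio path are placeholders):

```python
import torch
from pyannote.audio import Pipeline

# Load the pure-PyTorch pipeline (no onnxruntime involved anymore)
pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",
    use_auth_token="YOUR_HF_TOKEN",  # placeholder: your Hugging Face token
)

# With no ONNX model left in the pipeline, this moves every model to the GPU
pipeline.to(torch.device("cuda"))
diarization = pipeline("audio.wav")  # placeholder audio file
```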
I've been testing the new pyannote 3.0.0 today, but it seems that adding

```python
import torch
pipeline.to(torch.device("cuda"))
```

to my code does not allocate the pipeline to the GPU anymore.
I have tried the following:
But nothing seems to work.
When I type `pipeline.device` after applying the configuration, it returns `device(type='cuda')`, but it is still not using it. This is what `nvidia-smi` returns **while the pipeline is running**:

[nvidia-smi output attached as a screenshot]

Colab notebook to reproduce the issue (MRE): https://colab.research.google.com/drive/16zpDvNa5fUs8a_r-d-DxbdAdLHEPrgta?usp=sharing
PS: this was working with the Interspeech and 2022.07 checkpoints on the previous version of pyannote.
Edit: I did some testing and the problem seems to be the embedding model. I tried the `speechbrain/spkrec-ecapa-voxceleb` embedding model by editing the `config.yaml` file and, in that case, the GPU was properly used; see the sketch below.
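For anyone trying the same workaround, a hedged sketch of the edited `config.yaml` (the structure follows the pyannote speaker-diarization pipeline config; exact fields may differ between checkpoints):

```yaml
pipeline:
  name: pyannote.audio.pipelines.SpeakerDiarization
  params:
    # Swap the ONNX-based embedding for the pure-PyTorch SpeechBrain model,
    # which pipeline.to(torch.device("cuda")) can actually move to the GPU.
    embedding: speechbrain/spkrec-ecapa-voxceleb
    segmentation: pyannote/segmentation-3.0
```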