fix: fix WeSpeakerPretrainedSpeakerEmbedding GPU support #1478
Conversation
Thanks a lot. That really helped me narrow things down. I think the issue is with the default … I just pushed a new commit. Can you try again?
I was testing the solution in an isolated environment using the docker image … First, I tested adding only `onnxruntime-gpu==1.16.0` to my requirements along with `pyannote.audio==3.0.0`, but the time didn't change and the GPU was not used. Second, I tried using only this commit, and I still got the same time with no GPU being used. What I want to point out here is that the GPU IS NOT BEING USED even though `onnxruntime-gpu` is installed. Is it possible that we need to allocate the pipeline to the GPU in a different manner? Using the onnx library, for instance? Since you've pushed another commit, I'll build the image again and come back here with the results.
@hbredin Still not working with the new commit: I still get the same embedding time and the GPU is not being used. Here's a snippet of `nvidia-smi` output while the embedding step was at 40%:
A reminder that this is an isolated environment inside a docker container. The GPU works fine for the older diarization pipeline @2.1 and for the faster_whisper algorithm, but it does not work for the embedding model of the new pipeline.
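A quick way to check whether ONNX Runtime can actually see the GPU (a minimal sketch, not from this thread; the provider name comes from the onnxruntime documentation):

```python
# Hedged diagnostic: returns True only if onnxruntime is importable
# AND it lists the CUDA execution provider. If onnxruntime (CPU-only)
# shadows onnxruntime-gpu, this returns False even on a GPU machine.
import importlib.util


def cuda_provider_available() -> bool:
    """Check whether onnxruntime would run on the GPU."""
    if importlib.util.find_spec("onnxruntime") is None:
        return False  # onnxruntime is not installed at all
    import onnxruntime as ort
    return "CUDAExecutionProvider" in ort.get_available_providers()


print(cuda_provider_available())
```

Running this inside the container would distinguish "the pipeline is not using the GPU" from "onnxruntime cannot use the GPU at all".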
That's weird, because it solves the issue on Google Colab.
I have no knowledge of … Could it be something related to an incompatibility between …? Are you 100% sure that it used the latest commit and no cache was used?
Yes, I am sure. I rebuilt the image from scratch and checked that your commit was in fact in the code. I'll go check the colab with your solution. As you mentioned, the problem might be a dependency conflict with the specific image that I'm using in docker. I'll check it out and let you know. Just FYI, docker containers are isolated environments that run only what the application needs, so it should work in all cases, not only on Colab.

Edit: indeed, it worked in my MRE colab. I'll check my docker container to see if I can make it work there.
Update: I did a `pip install --force-reinstall onnxruntime-gpu` and it worked in the docker container, but when loading the pipeline, I got the following warning:
Do you know what it might be? Actually, I believe I know what the problem is: I'm also installing faster_whisper in this environment, and faster_whisper's requirements are:
So it installs `onnxruntime` while your library installs `onnxruntime-gpu`. I'll see if I can sort this out. I believe you may complete this pull request; this is a problem on my end, and your code is working. Question: will you publish these alterations to the PyPI package, like a 3.0.1 version?
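For anyone hitting the same conflict, here is a small stdlib-only sketch (the helper name is mine, not pyannote's) that reports which onnxruntime flavours are installed:

```python
# Hedged sketch: detect the onnxruntime / onnxruntime-gpu conflict
# described above. Both wheels ship the same `onnxruntime` module, so
# having both installed means whichever was installed last wins.
from importlib import metadata


def installed_onnxruntime_flavours() -> dict:
    """Map each installed onnxruntime flavour to its version string."""
    flavours = {}
    for dist in ("onnxruntime", "onnxruntime-gpu"):
        try:
            flavours[dist] = metadata.version(dist)
        except metadata.PackageNotFoundError:
            pass  # this flavour is not installed
    return flavours


print(installed_onnxruntime_flavours())
```

If the dict contains both keys, that is the broken state the `--force-reinstall` above works around.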
Thanks. Will make a few more tests on my side and will then merge.
Yes, it will be released as 3.0.1.
Just as a side note, I believe your model will often be used together with faster_whisper, and depending on `onnxruntime-gpu` may make it incompatible with that library. I am going to run a few more tests and let you know my results, but so far, faster_whisper stopped working when I uninstalled `onnxruntime` to leave only `onnxruntime-gpu`. Do you believe there is another alternative? Like porting your model away from onnx? I posted an issue on faster_whisper's repo to address the situation.
The point is that this is not my model. Working on it, though ;-)
Oh, fair enough. But is it possible to convert it to not use onnx?
Issue #1477 has already been opened about this particular aspect.
I just released 3.0.1, including this fix. |
Awesome! Just checked PyPI, great job! FYI, I believe it's still not showing under GitHub's releases yet.
pyannote 3.0.0 has a bug where the new embedding model does not run on the GPU. This is fixed in version 3.0.1 via pyannote/pyannote-audio#1478.
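With the fix released, usage looks roughly like the sketch below. This is illustrative, not from the thread: the model name and `Pipeline` API follow the pyannote.audio 3.x documentation, and the function is a hypothetical wrapper of mine.

```python
# Hedged sketch, assuming pyannote.audio >= 3.0.1 is installed.
def load_diarization_pipeline(hf_token: str):
    """Load the 3.0 diarization pipeline and move it to the GPU if present."""
    import torch
    from pyannote.audio import Pipeline

    pipeline = Pipeline.from_pretrained(
        "pyannote/speaker-diarization-3.0", use_auth_token=hf_token
    )
    if torch.cuda.is_available():
        # With 3.0.1, the ONNX embedding model follows the pipeline to the GPU.
        pipeline.to(torch.device("cuda"))
    return pipeline
```

Note that the `onnxruntime` vs `onnxruntime-gpu` conflict discussed above still applies: only one flavour should be installed for the GPU path to work.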
FYI: #1537
Should fix #1475 #1476
I would love feedback from @doublex @guilhermehge @realfolkcode