[BUG] [GPU] Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory #15
As a workaround, it appears that I can do the following:
1. Install torch.
2. Exit the container bash session.
3. Create a `.bashrc` file under the `/config` directory (vim is not installed in the container, so I used the host for this) with the contents sketched below.
4. Restart the container.
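The actual snippet didn't survive this capture; a minimal sketch of what such a `.bashrc` might contain, assuming the cuDNN/cuBLAS libraries come from pip-installed NVIDIA wheels (the same `LD_LIBRARY_PATH` approach mentioned later in the thread):

```bash
# Illustrative only: prepend the pip-installed cuBLAS/cuDNN wheel directories
# to the loader path so ctranslate2 can resolve libcudnn_ops_infer.so.8.
export LD_LIBRARY_PATH="$(python3 -c 'import os, nvidia.cublas.lib, nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'):${LD_LIBRARY_PATH}"
```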
This worked for me too. Thank you for the suggestion.
This issue has been automatically marked as stale because it has not had recent activity. This might be due to missing feedback from OP. It will be closed if no further activity occurs. Thank you for your contributions.
I had this same issue. I'm no expert, but is this because the Dockerfile is installing libs for a different CUDA version?
Upstream project wants cu11, IIRC.
It looks like upstream has switched the default recommendation to CUDA 12 (SYSTRAN/faster-whisper@3d1de60), with the caveat that this may break some CUDA 11 setups. I don't think we can win on that, because the same version of ctranslate2 won't support both 11 and 12, and I don't really want a) a 5GB+ image or b) two different branches for different CUDA versions.
It also looks like nvidia-cudnn-cu12 version 9+ has issues, so it's going to need pinning.
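Hypothetically, pinning the wheel below major version 9 with pip would look like this (the exact constraint used in the image is an assumption):

```bash
# Illustrative pin: stay on the cuDNN 8.x wheel, since the 9.x wheels
# reportedly break this setup.
pip install "nvidia-cudnn-cu12<9"
```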
Please try the updated image tag.
This version appears to be working without the `.bashrc` workaround.
PR has been merged; the new image should be built in the next ~30 mins.
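Once it's published, picking up the rebuilt image is just a pull and a container recreate:

```bash
# Pull the rebuilt GPU image, then recreate the container to pick up the fix.
docker pull lscr.io/linuxserver/faster-whisper:gpu
```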
I'm running into an issue with the latest version and …
I'm also getting this error with the tool I'm developing using this image.
I had this error locally on the host as well, and had to add LD_LIBRARY_PATH to my env vars to get it working. I see the workaround above, but it also says it's fixed. Is there any reason I still can't run faster-whisper commands?
Update: I ran a command inside my Docker container, then copied the output into my Dockerfile to get things working. It's basically the same workaround as before.
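The exact command wasn't captured here; a sketch of the usual approach, assuming the NVIDIA pip wheels are present (the printed path would then go into an `ENV LD_LIBRARY_PATH=...` line in the Dockerfile):

```bash
# Illustrative: print the directories containing the pip-installed cuBLAS and
# cuDNN shared libraries. Copy the output into the Dockerfile, e.g.
#   ENV LD_LIBRARY_PATH=<printed path>
python3 -c 'import os, nvidia.cublas.lib, nvidia.cudnn.lib; print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))'
```

As the comment notes, this is essentially the same workaround as the `.bashrc` approach above, baked into the image instead.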
This issue is locked due to inactivity |
Is there an existing issue for this?
Current Behavior
I'm using the lscr.io/linuxserver/faster-whisper:gpu image, and I'm encountering issues where any Wyoming prompt results in the following error:
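The full log wasn't captured here, but per the issue title the failing line is:

```text
Could not load library libcudnn_ops_infer.so.8. Error: libcudnn_ops_infer.so.8: cannot open shared object file: No such file or directory
```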
It appears to be related to this behavior in faster-whisper:
SYSTRAN/faster-whisper#516
Expected Behavior
faster-whisper is able to use the GPU to transcribe speech to text.
Steps To Reproduce
1. Set up the faster-whisper Docker container (see the sketch after this list).
2. Add faster-whisper to Home Assistant using the Wyoming protocol.
3. Set up a Raspberry Pi 3+ with wyoming-satellite per https://github.com/rhasspy/wyoming-satellite/blob/master/docs/tutorial_installer.md
4. Prompts are responded to (by the local wyoming-wakeword.service), but the logs on the Docker container indicate an error.
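The original container config wasn't captured; a representative run command for the GPU image, assuming the standard linuxserver.io parameters (values are illustrative, not necessarily what OP used):

```bash
# Illustrative setup: standard linuxserver.io parameters plus GPU access.
# WHISPER_MODEL and the Wyoming port 10300 follow the image's documentation;
# adjust the paths, IDs, and model for your host.
docker run -d \
  --name=faster-whisper \
  --gpus all \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -e WHISPER_MODEL=tiny-int8 \
  -p 10300:10300 \
  -v /path/to/config:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/faster-whisper:gpu
```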
Logs for docker container lscr.io/linuxserver/faster-whisper:gpu
Logs for wyoming-satellite.service
Environment