
Cudnn_cnn_infer64_8.dll not located. Please advise. #5

Open
Mayonezyck opened this issue Jul 7, 2024 · 9 comments

Comments

@Mayonezyck

2024-07-06 20:14:52.028 | INFO | asr.asr_with_vad:_process_detected_audio:222 - Detected pause after speech. Processing...
2024-07-06 20:14:52.028 | INFO | asr.asr_with_vad:_process_detected_audio:224 - Stopping listening...
Could not locate cudnn_cnn_infer64_8.dll. Please make sure it is in your library path!


Above is the error output.
Platform: Windows 10
Graphics card: NVIDIA RTX 2060
cuDNN was installed with:
py -m pip install nvidia-cudnn-cu12
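Worth noting: the `nvidia-cudnn-cu12` wheel typically places its DLLs inside `site-packages` (under `nvidia/cudnn/bin`), which is usually not on the loader's search path, so installing the wheel alone may not make the DLL findable. A small diagnostic sketch (the helper `find_dll_on_path` is my own illustration, not part of this repo) to check where, if anywhere, the DLL is visible:

```python
# Hypothetical diagnostic helper (not from this project): scan each directory on
# PATH for a DLL, mirroring how the Windows loader would resolve it.
import os

def find_dll_on_path(dll_name, path_env=None):
    """Return the first PATH directory containing dll_name, or None."""
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    for directory in path_env.split(os.pathsep):
        if directory and os.path.isfile(os.path.join(directory, dll_name)):
            return directory
    return None

if __name__ == "__main__":
    hit = find_dll_on_path("cudnn_cnn_infer64_8.dll")
    print(hit or "cudnn_cnn_infer64_8.dll is NOT visible on PATH")
```

If this prints "NOT visible", the fix is to add the directory that actually contains the DLL to `Path` (or copy the DLL somewhere already on it).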

@t41372
Owner

t41372 commented Jul 7, 2024

I'm not entirely sure. Maybe you can check out this issue for potential solutions. microsoft/onnxruntime#18973

The problem happens in the inference stage of the speech recognition module, after voice activity detection. Which speech recognition backend are you using? Are you using faster-whisper?

If you are using faster-whisper, maybe check their documentation to see if anything is missing. I should probably add this to my documentation...

I'm working on dockerizing this program with Nvidia GPU passthrough, which may potentially solve your problem.

@Mayonezyck
Author

Cool! Great to hear, and thanks for your reply! I have been stepping through the code to see which part it gets stuck in. Yes, I'm using faster-whisper as the default. Apparently it gets stuck in this function:

def transcribe_np(self, audio: np.ndarray) -> str:
    segments, info = self.model.transcribe(
        audio,
        beam_size=5 if self.BEAM_SEARCH else 1,
        language=self.LANG,
        condition_on_previous_text=False,
    )

@Mayonezyck
Author

Probably it's because I need to install the specific CUDA and cuDNN versions for my 2060... Can you please share your CUDA and cuDNN versions?

@t41372
Owner

t41372 commented Jul 7, 2024

Well... I'm using an Apple silicon Mac, so I don't use CUDA. I haven't actually tried running this project on an Nvidia machine yet.

@t41372
Owner

t41372 commented Jul 7, 2024

I created the Dockerfile and added some docs to the README for the Nvidia GPU passthrough container. It uses cuda:11.2.2-cudnn8. However, I haven't had the chance to test it. If you feel stuck fixing CUDA issues, maybe you can take some inspiration from it, or just help me test the Nvidia container. It still has a lot of issues, but they are a different set of issues, I guess...
By the way, let me know when your issue is resolved.

@Mayonezyck
Author

I will let you know how that goes! Meanwhile, I'm going to test it on my M1 laptop and try out Docker.

@Mayonezyck
Author

Update (may not be helpful): I haven't had a chance to try the Docker image yet, but your repo works on my 4090 setup on an Ubuntu 20.04 system where CUDA and cuDNN are correctly set up. So it's a user error on my Windows computer!

@tianleiwu

tianleiwu commented Jul 29, 2024

Note that there are cuDNN 8 and cuDNN 9, and the commands to install onnxruntime for CUDA 11 and CUDA 12 are different. See the following for details:
https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements

onnxruntime-gpu for CUDA 11 needs cuDNN 8; you will need pip install nvidia-cudnn-cu11==8.9.7.29
onnxruntime-gpu 1.18.1 for CUDA 12 needs cuDNN 9, while older versions use cuDNN 8. For cuDNN 9, you can install it like pip install nvidia-cudnn-cu12==9.2.1.18
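To check which cuDNN wheel, if any, is already installed in the current environment (so it can be matched against what the installed onnxruntime-gpu expects), a quick sketch; the helper below is my own illustration, not part of onnxruntime:

```python
# Illustrative check: report which nvidia-cudnn pip wheels are installed.
from importlib import metadata

def installed_cudnn_wheels():
    """Return {package_name: version} for the nvidia-cudnn wheels present."""
    found = {}
    for name in ("nvidia-cudnn-cu11", "nvidia-cudnn-cu12"):
        try:
            found[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            pass  # wheel not installed; skip it
    return found

if __name__ == "__main__":
    print(installed_cudnn_wheels() or "no nvidia-cudnn wheel installed")
```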

@ghost

ghost commented Nov 10, 2024

For Windows, you should check your system variables and make sure your cuDNN files have been placed in the right folders.

For example, I'm using Windows 11 with an RTX 4060 Laptop graphics card and CUDA 12.6, which was installed in "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6", so my system variable "Path" should contain these:

  • C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin
  • C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\lib\x64
  • C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\libnvvp
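One caveat to add here (my addition, not from the comment above): since Python 3.8, Windows no longer consults `PATH` when resolving the dependent DLLs of extension modules, so a correct `Path` variable may still not be enough inside Python. `os.add_dll_directory` can register the directory explicitly; a sketch, with the CUDA path below being an assumption:

```python
# Sketch: explicitly register a directory for DLL resolution on Windows
# (Python 3.8+). On other platforms this is a harmless no-op.
import os

def register_dll_dir(path):
    """Register path for DLL lookup; False if unsupported or missing."""
    if hasattr(os, "add_dll_directory") and os.path.isdir(path):
        os.add_dll_directory(path)  # Windows-only API (Python 3.8+)
        return True
    return False

register_dll_dir(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin")
```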

And I need to download "cuDNN for CUDA 12.x" from https://developer.nvidia.com/rdp/cudnn-archive ; it comes as a .zip file. After unpacking it, I need to do these steps:

  1. Copy ./cudnn/bin/*.dll to ./NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/
  2. Copy ./cudnn/include/*.h to ./NVIDIA GPU Computing Toolkit/CUDA/v12.6/include/
  3. Copy ./cudnn/lib/x64/*.lib to ./NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/
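The copy steps above can be sketched as a small script. The paths are assumptions (adjust CUDNN_DIR to wherever the archive was unpacked, and CUDA_DIR to your install); note the DLLs go to bin, the .h headers to include, and the .lib import libraries to lib\x64:

```python
# Sketch of the three cuDNN copy steps. CUDNN_DIR and CUDA_DIR are assumptions.
import glob
import os
import shutil

CUDNN_DIR = r"C:\cudnn"  # where the cuDNN .zip was unpacked (assumption)
CUDA_DIR = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6"

def copy_cudnn_files(cudnn_dir=CUDNN_DIR, cuda_dir=CUDA_DIR):
    """Copy cuDNN DLLs, headers, and import libraries into the CUDA tree."""
    mappings = [
        (os.path.join("bin", "*.dll"), "bin"),                # runtime DLLs
        (os.path.join("include", "*.h"), "include"),          # headers
        (os.path.join("lib", "x64", "*.lib"), os.path.join("lib", "x64")),
    ]
    copied = []
    for pattern, dest_sub in mappings:
        dest = os.path.join(cuda_dir, dest_sub)
        for src in glob.glob(os.path.join(cudnn_dir, pattern)):
            shutil.copy2(src, dest)
            copied.append(os.path.basename(src))
    return copied
```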

Then restart the project; you may also need to restart the computer for the configuration to take effect.

This is how I fixed a similar problem when I encountered it; I hope it helps.
