[WSL][DOCKER] Internal: Blas GEMM launch failed ? #2170
Replies: 6 comments 8 replies
-
We've never tested on WSL. Is this using our official training images?
-
The allocated memory makes me think there are other processes using the GPU, and that this is the problem. Or maybe it's something else that's WSL-specific. Hopefully someone else has figured it out and can share tips, but we don't support training on Windows; you might have better luck on a Linux system.
On 23 Mar 2022, at 23:55, Max Watermolen wrote:
So on just WSL, installed to a Python 3.7 venv, it is only using the CPU, even with the tensorflow-gpu package installed.
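A quick way to narrow down the CPU-only symptom is to ask TensorFlow directly which devices it sees. This is a minimal sketch assuming TensorFlow 1.15 in the same Python 3.7 venv; the device listing helper lives under tensorflow.python.client:

```python
# Check whether TensorFlow 1.15 can see the GPU at all under WSL.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU device name:", tf.test.gpu_device_name() or "<none>")

# Lists every device TensorFlow registered; a CPU-only list means the
# CUDA/cuDNN libraries are not being found from inside WSL.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type, device.physical_device_desc)
```

If only /device:CPU:0 shows up here, the tensorflow-gpu build is not finding the CUDA libraries under WSL, which would match the CPU-only training behavior.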
-
So I just tested this on Kali in Docker and I am seeing the same behavior.
-
Howdy @reuben, just wanted to let ya know that upon building the training image I was able to get that docker image working today on WSL.
-
I had the same issue, but this time with Ubuntu 18 and an RTX 3090. The problem is that TensorFlow 1.15 is built around CUDA 10, while the RTX 30 series requires CUDA 11. I finally found a workaround: follow the manual installation from here: except that, before installing the STT with
then edit setup.py to comment out those lines:
then continue the install of 🐸-STT: now it should work properly with CUDA 11 (I also use the flag --use_allow_growth true).
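For what it's worth, --use_allow_growth true maps onto TensorFlow 1.x's per-session GPU options. A minimal sketch of the equivalent setting in plain TF 1.15 (outside the STT training scripts, just to show what the flag changes):

```python
# Roughly what --use_allow_growth true requests: allocate GPU memory on
# demand instead of reserving nearly all of it at session start-up, which
# is a common trigger for "Blas GEMM launch failed".
import tensorflow as tf  # TensorFlow 1.15

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

sess = tf.Session(config=config)  # sessions created with this config grow GPU memory as needed
```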
-
I'm having this same problem but with deepspeech. I have an RTX 3080 and I'm trying to fine-tune the 0.93 deepspeech image/model. I tried the method you mentioned but it didn't seem to work. @MaxwellDPS @lvialle How did you get past your GPU requiring CUDA 11.x while tensorflow-gpu 1.15.4 depends on CUDA 10?
-
Howdy,
Just switching from deepspeech: upon launching the docker container and running training, I am receiving this.
Notes: nothing else is using the GPU, the GPU is visible in nvidia-smi, and this worked with deepspeech.
Am I doing something wrong? Thanks in advance! :)
Launch command:
Stack trace:
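Before digging into the training code, it can help to reproduce the failure with a bare GEMM inside the same container. This is a sketch assuming TF 1.15 in the training image; the matrix sizes are arbitrary. If this also raises "Blas GEMM launch failed", the issue is in the CUDA/cuBLAS/GPU setup (memory already taken, mismatched CUDA version, GPU not exposed to Docker) rather than in STT itself:

```python
# Tiny GPU matmul: if cuBLAS cannot initialize, this hits the same
# "Blas GEMM launch failed" error without any STT code involved.
import tensorflow as tf  # TensorFlow 1.15 (assumed from the training image)

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # same effect as --use_allow_growth true

with tf.device("/gpu:0"):
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.reduce_sum(tf.matmul(a, b))

with tf.Session(config=config) as sess:
    print("GEMM result:", sess.run(c))
```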