There's no information on this repo about how much VRAM this thing requires.
I can't even run a single epoch with 8GB. Immediate OutOfMemoryError.
A Google search shows people variously claiming that "finetune" works on 8GB, or that it requires 16GB.
I can run the regular xtts repo and RVC without issue.
This repo, unlike the regular xtts repo, doesn't appear to support the --lowvram argument.
My only remaining option is to set it up on Windows, since the Windows driver apparently lets CUDA spill over into system RAM.
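Before I try that, one cheap experiment is PyTorch's caching-allocator knob, PYTORCH_CUDA_ALLOC_CONF, which can reduce fragmentation-driven OOMs. This is a generic PyTorch setting, not anything this repo documents, and the 128 MiB split size below is just a guess to experiment with:

```python
# Generic PyTorch allocator tuning, not a feature of this repo.
# Must be set before torch initializes CUDA, so set it before importing torch.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # value is a guess

import torch

# Sanity check that the GPU is visible and report total VRAM.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_properties(0).total_memory // 2**20, "MiB total")
```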
I have an NVIDIA GeForce RTX 4070 with 12 GB of VRAM, running in a conda environment with a Python venv. I'm using nvtop to monitor VRAM usage: it sits at 35-40% until about epoch 2 or 3, then jumps to 100% and goes OOM.
Is there a memory leak, or am I doing something wrong that's causing this? I have lowered all the settings to the bare minimum:
Epochs - 6
Batch size - 2
Grad accumulation steps - 2
Seconds - 7
I'm using cudatoolkit 11.8 and cuDNN 8.9.2.26. I'm using the large-v3 model, and the OOM always happens around epoch 2 or 3.
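If it helps, I can instrument the training loop. Here's the kind of generic PyTorch sketch I'd use to tell a real leak from allocator caching; the on_epoch_end hook is hypothetical, and where it plugs in depends on this repo's trainer:

```python
import gc
import torch

def log_cuda_memory(tag: str) -> None:
    # memory_allocated() = live tensor memory; memory_reserved() = everything
    # the caching allocator holds (roughly what nvtop reports).
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"[{tag}] allocated={alloc:.0f} MiB  reserved={reserved:.0f} MiB")

epoch_losses: list[float] = []

# Hypothetical hook -- placement depends on this repo's training loop.
def on_epoch_end(epoch: int) -> None:
    gc.collect()              # drop Python references that pin GPU tensors
    torch.cuda.empty_cache()  # return cached blocks so nvtop shows real usage
    log_cuda_memory(f"epoch {epoch}")

# Classic leak pattern inside the loop:
#   epoch_losses.append(loss)         # retains the whole autograd graph
# versus the fix:
#   epoch_losses.append(loss.item())  # stores a plain Python float
```

If allocated grows epoch over epoch, that points at retained tensors rather than my batch-size settings; if only reserved grows, it's more likely allocator fragmentation.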