Thank you for your impressive work! However, I trained it on an NVIDIA RTX 3090 with a batch size of 2 in womask_pet.conf, and it still runs out of memory. Is there anything wrong with my configuration parameters? How much memory does the model take with the default batch size of 2048?
Exception has occurred: RuntimeError
CUDA out of memory. Tried to allocate 1.50 GiB (GPU 0; 23.70 GiB total capacity; 20.47 GiB already allocated; 587.56 MiB free; 21.03 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
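In case it helps while debugging, here is a minimal sketch of applying the `max_split_size_mb` suggestion from the error message above. The value 128 is only a guess, and the environment variable has to be set before the CUDA caching allocator is initialized (exporting it in the shell before launching the training script works just as well):

```python
import os

# Cap the size of cached allocator blocks to reduce fragmentation,
# as suggested by the OOM message. Must be set before the first CUDA
# allocation, so do it before importing torch (or export it in the shell).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 MiB is a guess, tune as needed

import torch

# Optional: print a summary of allocated vs. reserved memory to see
# whether fragmentation or raw usage is the problem.
print(torch.cuda.memory_summary(device=0, abbreviated=True))
```

This only mitigates fragmentation; if the model genuinely needs more than 24 GiB at the chosen settings, the memory-heavy parameters in the config (e.g. the ray batch size or number of samples per ray, whatever they are named in womask_pet.conf) would still need to be reduced.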