I'm testing DetectoRS training on a single GPU with 1 image per GPU and 1 worker per GPU, and CUDA memory is already full. My GPU has 11 GB of memory. How can a detection model take this much memory, or is there a chance I'm doing something wrong? How much CUDA memory per GPU does this model require?
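Not an answer to the exact memory requirement, but two common levers for fitting DetectoRS into a smaller card are mixed-precision (fp16) training and keeping the per-GPU batch size at 1. A minimal config-override sketch in mmdetection's Python config style follows; the base config filename is an assumption and may differ in your mmdetection version:

```python
# Sketch of an mmdetection-style override config to reduce GPU memory.
# The _base_ path below is an assumption -- adjust to the DetectoRS
# config you are actually training from.
_base_ = './detectors_htc_r50_1x_coco.py'

# Mixed-precision training stores activations in half precision,
# which substantially cuts activation memory.
fp16 = dict(loss_scale=512.0)

# Keep the smallest possible batch and loader footprint per GPU.
data = dict(
    samples_per_gpu=1,
    workers_per_gpu=1,
)
```

If memory is still exhausted, shrinking `img_scale` in the training pipeline is the next lever, since activation memory in two-stage detectors grows with input resolution.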