Replies: 2 comments
-
You would need to provide more info for that. If you're asking about the cheapest GPU you can use: you can fine-tune SDXL in half precision with AdamW8bit, a batch size of 1, and gradient checkpointing on a single 3090 or 4090 (24 GB VRAM). You can also precompute the image embeds and save them to an SSD if you're not using augmentation, so you won't need to load the VAE during training. You can go even lower if you train a LoRA and then merge it. I think most people use one or two A100s to fine-tune models, to balance cost and performance, if that's what you're asking.
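As a rough back-of-the-envelope check (my own estimate, not an exact measurement; assuming the SDXL base UNet at roughly 2.6B trainable parameters), here is why that setup fits in 24 GB:

```python
# Approximate VRAM budget for full SDXL UNet fine-tuning with
# fp16 weights/gradients and bitsandbytes-style 8-bit AdamW.
# The ~2.6e9 parameter count is an assumption for the SDXL base UNet.
params = 2.6e9

weights_fp16 = params * 2      # fp16 weights: 2 bytes per parameter
grads_fp16 = params * 2        # fp16 gradients: 2 bytes per parameter
adamw_8bit = params * 2 * 1    # two optimizer states, 1 byte each in 8-bit

gib = 1024 ** 3
total_gib = (weights_fp16 + grads_fp16 + adamw_8bit) / gib
print(f"~{total_gib:.1f} GiB before activations")
```

Activations come on top of that, which is why gradient checkpointing and batch size 1 matter; with them the whole thing stays under 24 GB, so a 40 GB card is not strictly required for this recipe.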
-
Thank you very much!
-
Could you please tell me what kind of GPU cards, and roughly how many days, are needed for full fine-tuning of SDXL on 5000 text-image pairs at 1024 resolution? Thank you very much!
Does it need 40 GB of GPU memory?