- Hi, thanks for the tool! I wonder whether it's possible to do a full fine-tune of an 8B model on a 24GB GPU using this tool? (CPU offloading is fine, of course.)
- Hey, I don't think it's possible with 24GB. Would you be able to try with 2x48GB using this config? https://github.com/axolotl-ai-cloud/axolotl/blob/main/examples/llama-3/fft-8b-liger-fsdp.yaml
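For context, a back-of-the-envelope sketch of the memory arithmetic (my own numbers, not taken from the linked config; the byte-per-parameter figures are the usual bf16 mixed-precision AdamW accounting, and the 8.03B parameter count approximates Llama-3-8B):

```python
# Rough memory arithmetic for a full fine-tune of an ~8B model with AdamW
# under bf16 mixed precision. Counts training state only; activations,
# CUDA context, and fragmentation come on top of this.

PARAMS = 8.03e9  # approximate Llama-3-8B parameter count

BYTES_PER_PARAM = {
    "bf16 weights": 2,
    "bf16 gradients": 2,
    "fp32 master weights": 4,    # kept by mixed-precision optimizers
    "AdamW exp_avg (fp32)": 4,
    "AdamW exp_avg_sq (fp32)": 4,
}

total_gb = PARAMS * sum(BYTES_PER_PARAM.values()) / 1024**3
print(f"states only, no activations: {total_gb:.0f} GB")  # ~120 GB

for n_gpus, vram in [(1, 24), (2, 48)]:
    per_gpu = total_gb / n_gpus  # FSDP full_shard splits states evenly
    print(f"{n_gpus}x{vram}GB -> {per_gpu:.0f} GB/GPU of states alone")
```

The optimizer and gradient states alone come to roughly 120 GB before any activations, so a single 24GB card is about 5x over budget, and even the suggested 2x48GB setup presumably still relies on CPU offloading and Liger's reduced activation footprint to fit.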