Hardware requirements #23
I used an A800 80GB GPU.
Thanks for sharing the details. Any idea what the minimum GPU RAM is that can do it?
Thanks
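As a rough back-of-envelope estimate (the numbers below are an assumption, not something confirmed in this thread): full fine-tuning with Adam in mixed precision typically costs about 16 bytes per parameter before activations, so a 2B model already needs roughly 32 GB for weights, gradients, and optimizer state alone:

```python
# Rough VRAM estimate for full fine-tuning with Adam in mixed precision.
# Assumptions (not confirmed in this thread): bf16 weights and gradients,
# fp32 Adam moments and an fp32 master copy; activations excluded.
params = 2e9  # ~2B parameters

weights_bf16 = params * 2      # model weights
grads_bf16   = params * 2      # gradients
adam_states  = params * 4 * 2  # fp32 first and second moments
master_fp32  = params * 4      # fp32 master copy of weights

total_gb = (weights_bf16 + grads_bf16 + adam_states + master_fp32) / 1e9
print(f"~{total_gb:.0f} GB before activations")  # ~32 GB
```

Activations (which scale with batch size and sequence length) come on top of that, which is usually what pushes a run past 80 GB.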
@zhangfaen I have tried to run the 2B model on an A100 80 GB card and I still get OOM issues. How is a 2B model taking that much VRAM? When I use Unsloth instead, I can run even a 7B model (fp16) in under 50 GB with a batch size of 2 for the same use case. The reason I tried this repo was the distributed training support, but even with 4xA100 the training breaks due to OOM issues, and Unsloth does not support distributed training.
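If activation memory is the culprit, a minimal sketch of the usual memory-saving knobs, assuming the training script uses the Hugging Face `Trainer` (this repo's actual training loop may be configured differently):

```python
# A sketch only: assumes Hugging Face transformers' TrainingArguments;
# the repo's own script may not expose these settings directly.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # smallest per-GPU batch
    gradient_accumulation_steps=8,   # keep the effective batch size at 8
    gradient_checkpointing=True,     # recompute activations in the backward pass
    bf16=True,                       # half precision where the GPU supports it
    optim="adamw_bnb_8bit",          # 8-bit optimizer states; needs bitsandbytes
)
```

Gradient checkpointing usually gives the biggest win when long input sequences make activations, rather than the 2B parameters themselves, dominate VRAM.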
Hi,
I'm facing a memory issue.
Can someone who successfully trained the 2B model please share what GPU, GPU memory, and system memory they had on the machine they trained on?
Thanks