Colab: Balances the data distribution #1710
Conversation
Enables the use of VRAM so as not to saturate the system RAM.
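For context, the change amounts to passing an extra launch flag in the Colab notebook. A minimal sketch, assuming the stock launch cell from the Fooocus README (the exact diff in this PR may differ):

```
# Colab cell: clone Fooocus and launch with high-VRAM mode enabled.
# --always-high-vram is the flag discussed in this thread; the rest of
# the cell mirrors the standard Fooocus Colab instructions.
!pip install pygit2==1.12.2
%cd /content
!git clone https://github.com/lllyasviel/Fooocus.git
%cd /content/Fooocus
!python entry_with_update.py --share --always-high-vram
```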
Thank you for the contribution. While I totally agree with adding …
If you mean advanced -> model, no, I kept the default settings. As for the two flags, even though I kept them, I didn't notice any problems.
FYI: if you do not offload from VRAM and switch models, every additional model will also be kept in VRAM, which is fine, as models are unloaded when more VRAM is required than is free (see Fooocus/ldm_patched/modules/model_management.py, lines 357 to 359 at 0c4f20a). Please nevertheless test with multiple models and provide your results to make sure everything is working as expected. Looking forward to it :)
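For readers following the discussion, the policy described above (evict loaded models only when the requested VRAM exceeds what is free) looks roughly like this. A simplified sketch, not the actual Fooocus code at the referenced lines; `unload()` is a hypothetical stand-in for moving weights off the GPU:

```python
import torch

# Simplified sketch of an "unload on demand" policy; not the actual
# implementation in ldm_patched/modules/model_management.py.
loaded_models = []  # oldest first, most recently used last

def free_vram_bytes():
    # Ask the CUDA driver how much VRAM is currently free.
    free, _total = torch.cuda.mem_get_info()
    return free

def free_memory(memory_required, keep_loaded=()):
    """Unload models, oldest first, until memory_required bytes are free."""
    for model in list(loaded_models):
        if free_vram_bytes() >= memory_required:
            break  # enough room already; keep remaining models in VRAM
        if model in keep_loaded:
            continue  # never evict models needed for the current job
        model.unload()  # hypothetical: move weights back to system RAM
        loaded_models.remove(model)
```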
Sorry for the delay. I have tested the models and everything seems to work correctly. Regarding the flags I still have to check, but even with both of them present they don't seem to cause any kind of problem.
The solution in this issue has been referenced elsewhere and has already helped countless users to use Colab.
Okay, thanks for considering this change. Best regards.
Here are my extensive testing results. Tests have been conducted on Colab with a T4 instance (free tier) using 2 IP images (ImagePrompt) and a positive prompt in 1152×896, default model, default styles (irrelevant for the test). Launch commands for these combinations are sketched after the list.

`--always-high-vram --disable-offload-from-vram`
Process did NOT run out of memory for the first generation, but DID run out of memory when using upscale or different adapters afterwards.

`--always-high-vram --attention-split`
Process did NOT run out of memory for the first generation, but DID run out of memory when using upscale or different adapters afterwards.

`--always-high-vram --disable-offload-from-vram --attention-split`
Process did NOT run out of memory for the first generation (but was overall slower), and DID run out of memory when using upscale or different adapters afterwards.

Learnings:
=> using …
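The combinations above map to launch arguments like the following. A sketch, assuming the same entry_with_update.py entry point used on Colab:

```
# One of these lines per test run, matching the three combinations above:
!python entry_with_update.py --share --always-high-vram --disable-offload-from-vram
!python entry_with_update.py --share --always-high-vram --attention-split
!python entry_with_update.py --share --always-high-vram --disable-offload-from-vram --attention-split
```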
@mashb1t I'll add that the keys …
@nightowL821 You're not using Colab, so please check out https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md, and if this doesn't help, open a new discussion.
Analysis
After conducting several tests on Google Colab and reviewing the documentation, I found that launching Fooocus with specific flags not only enables the use of the cloud instance's VRAM (15 GB), significantly reducing system RAM consumption, but also improves processing speed and prevents premature process termination.
What the user notices
By default, VRAM remains unused and the program runs solely on system memory (12 GB). The program crashes when attempting to use more than one image in the prompt; the process is terminated with `^C`.

After this commit
Using the extra 15 GB of VRAM allows you to make the most of the program on Colab. Memory use is well balanced, and the program manages to process 4 images in the prompt.
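As a quick sanity check, both memory pools can be watched from a Colab cell while generating; these are standard tools, not part of this PR:

```
# GPU memory: the T4's ~15 GB should now show utilization during generation.
!nvidia-smi
# System RAM: usage should stay well below the 12 GB Colab limit.
!free -h
```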
Thanks for your work, best regards.