
Not enough space in the context's memory pool with ControlNet #178

Closed
daniandtheweb opened this issue Feb 19, 2024 · 5 comments

Comments

@daniandtheweb
Contributor

Whenever I try to use ControlNet with an input image larger than 512x512, I keep getting `ggml_new_object: not enough space in the context's memory pool (needed 18122976, available 16777216)`.
I'm currently using a HIPBlas build and have plenty of VRAM available.
Is this expected, or is there a way to manually increase the context's memory pool in the code?

@fszontagh
Contributor

fszontagh commented Feb 20, 2024

Check out my commit, where I made some modifications to avoid this:
6ee1c65

The interesting part is `params.mem_size`.

FYI: I didn't calculate anything.

@daniandtheweb
Contributor Author

I checked out that PR and it solves every issue I was having with the context's memory pool. Amazing job. I'll close this issue then.

@fszontagh
Contributor

Thanks, but as I wrote, I didn't calculate anything, so take these modifications "as-is". I tested them many times with my desktop app (only with CUDA and 12 GB of VRAM) and they work fine, but I'm skeptical about them, so please use them with caution.
I think we need some automatic pre-computation to derive these sizes from the model files, if that's possible, and stop using the hardcoded `mem_size` parameters.

@daniandtheweb
Contributor Author

daniandtheweb commented Feb 20, 2024

I still haven't looked carefully at the code, so I'm not sure how it works, but maybe some compile-time check could tune that parameter specifically for the GPU memory (even though that would be quite a bad choice for distributing the program).

@fszontagh
Contributor

fszontagh commented Feb 20, 2024

I think it depends on the size of the loaded model files (e.g. the LoRA model file size, the ControlNet model file size, etc.), which need to fit in the pool.
I tested some LoRAs earlier that I use with ComfyUI and elsewhere, and adjusted the parameters until my LoRA models fit and stopped failing at runtime.
So if we find a LoRA model file that is large enough, that will probably fail too.
