Multiple stability issues #165

Open
Petomai opened this issue Apr 25, 2024 · 0 comments

Machine: Generic Laptop
16 GB RAM
Windows 11
8 GB (shared) VRAM for Intel UHD Graphics

It doesn't behave the same way each time it's used. On first use, I get this error at 512x512, while Task Manager shows my GPU is only using 3.9 GB of 7.9 GB:
  RuntimeError: Exception from src\inference\src\core.cpp:116:
  [ GENERAL_ERROR ] Exception from src\plugins\intel_gpu\src\runtime\ocl\ocl_engine.cpp:179:
  [GPU] out of GPU resources

I've had it generate at 256x256, but the result is just random colors and blocks. It generates fast, way faster than CPU; it just doesn't look like anything I prompted.

If I press Generate again after the out of GPU resources error, it will sometimes generate, consuming 3.2 GB/7.9 GB; after the last iteration it jumps to 4.2 GB/7.9 GB and completes successfully. The image looks good! Very fast compared to CPU only.

If I press PNG Info, send to txt2img, and generate again, it sits at 100% CPU usage for 5 minutes at 0/20 steps, using no GPU memory. "Accelerate with OpenVINO" is still selected and GPU is selected, but it isn't being used. (Maybe "Send to txt2img" from PNG Info overrides or breaks the OpenVINO GPU option?)

So I try just copy-pasting the PNG info. It says: "ValueError: prompt_embeds and negative_prompt_embeds must have the same shape when passed directly, but got: prompt_embeds torch.Size([1, 77, 768]) != negative_prompt_embeds torch.Size([1, 154, 768])."
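
For context on those shapes (my reading, not verified against the webui or extension code): 154 is exactly 2 × 77, so the long negative prompt has presumably been split into two 77-token CLIP windows whose embeddings were concatenated, while the short positive prompt stayed at one window. A minimal standalone sketch, assuming the stock SD 1.x CLIP text encoder, that reproduces the mismatched shapes:

```python
from transformers import CLIPTokenizer, CLIPTextModel
import torch

# Assumption: this is the text encoder used by SD 1.x checkpoints.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_window(text):
    # Each CLIP window is padded/truncated to 77 tokens -> [1, 77, 768]
    ids = tokenizer(text, padding="max_length", max_length=77,
                    truncation=True, return_tensors="pt").input_ids
    with torch.no_grad():
        return text_encoder(ids).last_hidden_state

prompt_embeds = encode_window("a short positive prompt")  # [1, 77, 768]

# A negative prompt longer than one window gets encoded chunk by chunk and the
# chunks concatenated along the sequence dimension -> [1, 154, 768]
negative_prompt_embeds = torch.cat(
    [encode_window("first chunk of a long negative prompt"),
     encode_window("remaining chunk of that negative prompt")],
    dim=1,
)

print(prompt_embeds.shape, negative_prompt_embeds.shape)
# torch.Size([1, 77, 768]) torch.Size([1, 154, 768]) -- exactly the pair the
# pipeline rejects, since classifier-free guidance stacks the two tensors.
```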

I remove the negative prompts and press Generate again; it hits the out of GPU resources error again at 3.9/7.9 GB, followed by:
"BackendCompilerFailed: backend='openvino_fx' raised: RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 2, 77, 768] to have 4 channels, but got 2 channels instead While executing %l__self___conv_in : [num_users=3] = call_module[target=L__self___conv_in](args = (%l_sample_,), kwargs = {}) Original traceback: File "D:\OpenVino\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1026, in forward sample = self.conv_in(sample) Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information You can suppress this exception and fall back to eager by setting: import torch._dynamo torch._dynamo.config.suppress_errors = True"

Pressing Generate again, it all generates fast, no problem. The picture looks okay but has many problems.

Adding back the negative prompts gives the same error: "ValueError: prompt_embeds and negative_prompt_embeds must have the same shape when passed directly, but got: prompt_embeds torch.Size([1, 77, 768]) != negative_prompt_embeds torch.Size([1, 154, 768])."

I selected ESRGAN_4x and it dropped from using the GPU to using the CPU. It will stop using the GPU and switch to the CPU on its own (even though GPU is still selected in the OpenVINO script, it is using the CPU). Selecting CPU and switching back to GPU doesn't fix it; at that point it still uses only the CPU, and the entire program must be reloaded in order to use the GPU again. This seems to occur whenever a different upscaler is selected. Whether the CPU or the GPU is actually being used can be checked with Task Manager or other GPU/CPU monitoring software.

Pressing "Interrupt" doesn't notify "Interrupting..." but it works anyway. It still interrupts.
