[Bug]: LDSR (Latent Diffusion Super Resolution) does not respect --use-cpu all #4762

Closed
1 task done
chappjo opened this issue Nov 16, 2022 · 4 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments


chappjo commented Nov 16, 2022

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I have a laptop without a discrete GPU, so I run the web UI on my CPU with the command line argument "--use-cpu all". This works for Text2Image, Image2Image, ESRGAN, etc., but not for LDSR.

I have managed to fix this in a hacky way, and it no longer produces the error. I still need to wait for the upscale I'm running to finish so I can check whether the output is correct. On my CPU it takes nearly 2 hours to 4x upscale a 640x512 image :|

To fix this I opened ldsr_model_arch.py and (see the sketch after this list):

  1. Changed line 27 from model.cuda() to model.to(devices.cpu)
  2. Changed line 148 from c = c.to(torch.device("cuda")) to c = c.to(torch.device("cpu"))
  3. Added the import line from modules import devices as devices (the as devices alias is probably unnecessary)
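
For anyone wanting to apply the same hack, here is a rough sketch of how those three edits sit in modules/ldsr_model_arch.py. It is an abridged excerpt rather than a standalone script, and the exact line numbers may differ between commits:

```python
# modules/ldsr_model_arch.py -- abridged sketch of the hacky CPU-only patch
from modules import devices    # new import; the "as devices" alias isn't needed

# in load_model_from_config(), around line 27:
# was: model.cuda()
model.to(devices.cpu)          # keep the LDSR model on the CPU

# in the sampling path, around line 148:
# was: c = c.to(torch.device("cuda"))
c = c.to(torch.device("cpu"))  # move the conditioning tensor to the CPU too
```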

Obviously this is not a proper fix, but if anyone else is experiencing the same issue, they can use this hack.

Steps to reproduce the problem

  1. Install CPU torch
  2. Add --use-cpu all to COMMANDLINE_ARGS
  3. Text2Image, Image2Image, ESRGAN, etc should work fine
  4. Trying to upscale with LDSR gives error "AssertionError: Torch not compiled with CUDA enabled"

What should have happened?

LDSR should work with "--use-cpu all" in the COMMANDLINE_ARGS

Commit where the problem happens

98947d1

What platforms do you use to access UI ?

Linux

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

--skip-torch-cuda-test --no-half --use-cpu all --medvram

Additional information, context and logs

Error completing request
Arguments: (0, 0, <PIL.Image.Image image mode=RGB size=640x512 at 0x7F4A63EE66B0>, None, '', '', True, 0, 0, 0, 2, 512, 512, True, 3, 0, 1, False) {}
Traceback (most recent call last):
File "/home/pc/programs/linux/stable-diffusion-webui/modules/ui.py", line 185, in f
res = list(func(*args, **kwargs))
File "/home/pc/programs/linux/stable-diffusion-webui/webui.py", line 54, in f
res = func(*args, **kwargs)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 187, in run_extras
image, info = op(image, info)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 148, in run_upscalers_blend
res = upscale(image, *upscale_args)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/extras.py", line 116, in upscale
res = upscaler.scaler.upscale(image, resize, upscaler.data_path)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/upscaler.py", line 64, in upscale
img = self.do_upscale(img, selected_model)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model.py", line 54, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model_arch.py", line 87, in super_resolution
model = self.load_model_from_config(half_attention)
File "/home/pc/programs/linux/stable-diffusion-webui/modules/ldsr_model_arch.py", line 27, in load_model_from_config
model.cuda()
File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/pytorch_lightning/core/mixins/device_dtype_mixin.py", line 128, in cuda
device = torch.device("cuda", torch.cuda.current_device())
File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/init.py", line 552, in current_device
_lazy_init()
File "/home/pc/programs/linux/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/init.py", line 221, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
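
For reference, the assertion comes from torch's CUDA lazy initialisation and is raised by anything that touches torch.cuda on a CPU-only build, which is why pytorch_lightning's .cuda() call fails here. A quick standalone check (plain torch, nothing webui-specific):

```python
import torch

# On a CPU-only torch build this raises the AssertionError seen in the
# traceback above; on a CUDA-enabled build with a GPU it prints the index
# of the current device instead.
try:
    print(torch.cuda.current_device())
except AssertionError as e:
    print(e)  # "Torch not compiled with CUDA enabled"
```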

chappjo added the bug-report label on Nov 16, 2022

chappjo commented Nov 16, 2022

Ok, the upscale finished, and the output looks correct, so the hack fix does work.
[attached image 00003: the finished 4x LDSR upscale]

@krummrey

Calling cuda() directly makes it hard to switch to CPU or MPS. I guess there is a general device variable set somewhere. Can we use that in those two instances?
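
There is indeed a shared device abstraction: the hack above already imports modules.devices for devices.cpu, and the webui picks a general device at startup. Below is a self-contained sketch of that device-agnostic pattern; pick_device is a hypothetical stand-in for the webui's own helper (roughly devices.get_optimal_device()), not the actual fix:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the kind of helper modules/devices.py provides;
# the real webui wires --use-cpu into its shared device objects at startup.
def pick_device() -> torch.device:
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
model = nn.Linear(4, 4).to(device)   # instead of model.cuda()
c = torch.randn(1, 4).to(device)     # instead of c.to(torch.device("cuda"))
print(model(c).shape)                # works on CUDA, MPS, or CPU-only builds
```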

@wywywywy (Contributor)

I've created a new PR #5586 to address this.

Can you give it a try please?

@wywywywy (Contributor)

The PR has been merged now. Please give it a test.
