
SD XL support #11757

Merged: 25 commits from sdxl into dev, Jul 16, 2023

Conversation

AUTOMATIC1111
Owner

AUTOMATIC1111 commented Jul 12, 2023

Description

  • uses Stability-AI's repo from https://github.com/Stability-AI/generative-models
  • retains old repo for SD1.x models
  • the biggest change is that for SD XL, conditioning is now a dictionary ({'crossattn': Tensor(2x77x2048), 'vector': Tensor(2x2048)}) instead of a single tensor as it was for SD1 (Tensor(2x77x768)); see the sketch after this list
  • needs --no-half-vae commandline argument
  • generating images works
  • various attention optimizations work
  • medvram works; generating a 1024x1024 with medvram takes about 12GB on my machine, but it also works if I set the VRAM limit to 8GB, so it should work on 8GB videocards too
  • Tested to produce same (or very close) images as Stability-AI's repo (need to set Random number generator source = CPU in settings)
  • cheap live preview modes work
  • SDXL Loras seem to work. I tested https://civitai.com/models/106582/aogamisdxl and https://civitai.com/models/108448/daiton-sdxl-test.
  • textual inversion should not work - embeddings need to be created specifically for SDXL.
  • train tab will not work.
  • DDIM, PLMS, UniPC samplers do not work for SD XL
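
A minimal sketch of the conditioning change described above (shapes taken from the list; these are illustrative tensors only, not webui's actual code):

    import torch

    batch, tokens = 2, 77

    # SD1.x: conditioning is a single cross-attention tensor
    cond_sd1 = torch.zeros(batch, tokens, 768)

    # SDXL: conditioning is a dictionary holding two tensors
    cond_sdxl = {
        "crossattn": torch.zeros(batch, tokens, 2048),  # per-token text embeddings
        "vector": torch.zeros(batch, 2048),             # pooled conditioning vector
    }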

This branch has now been merged into dev. If you are on the sdxl branch, use git switch dev to get the latest dev updates.
To get the dev branch in a new webui installation:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git switch dev
webui-user.bat

Original image:
[image: orig]

Generated in webui:
[image: 00045-42]



@aetherwu

I am doing Kohya LoRA training atm, but that UI is not to my taste. The only part of it that's better than our UI is the queue system.

You might just want to try this queue extension for WebUI.
It works seamlessly for me like magic.

https://github.com/ArtVentureX/sd-webui-agent-scheduler

@Dekker3D

It would be nice to have a separate "medvram" option for this, I think. When using SD 1.5 based checkpoints, I don't need medvram, but for SDXL I'd need lowvram (if that works yet) because of my 10 gb vram.

@fuchao01

Checked out b717eb7, but a black image is generated.

@fuchao01

[image]

@AUTOMATIC1111
Owner Author

Try with --no-half-vae

@Erwin11

Erwin11 commented Jul 13, 2023

My 3060 laptop only has 6GB VRAM; it seems SD XL is unavailable on it 😂

@RoyDingZF

It shouldn't require so much VRAM to use SDXL. I have an RTX 3070 8GB and it works well in ComfyUI generating 1024x1024.

revert SD2.1 back to use the original repo
add SDXL's force_zero_embeddings to negative prompt
@TomKranenburg

It shouldn't require so much VRAM to use SDXL. I have an RTX 3070 8GB and it works well in ComfyUI generating 1024x1024.

Amazing. With my lowly 1080 I thought I'd been priced out of this one.

@AUTOMATIC1111
Owner Author

AUTOMATIC1111 commented Jul 13, 2023

During generation with --medvram it hovers at 7.1GB used and only jumps to ~12GB when finally making the image using the VAE. But I was also able to set the memory limit using torch.cuda.set_per_process_memory_fraction to 8GB and still generate the picture fine, with sdp-no-mem optimization, so it seems it should work on an 8GB card.
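
For anyone wanting to reproduce that memory-limit test, a rough sketch using PyTorch's torch.cuda.set_per_process_memory_fraction (the 8GB figure comes from the comment above; exactly how the limit was set in the test is an assumption):

    import torch

    # cap this process at ~8GB of VRAM to emulate an 8GB card on a larger GPU
    total = torch.cuda.get_device_properties(0).total_memory  # in bytes
    torch.cuda.set_per_process_memory_fraction(min(1.0, 8 * 1024**3 / total), device=0)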

@Lenowin777

It runs slow but OK on Comfy with 6gb vram, hopefully improvements will get it to that point on A1111, since I like A1111 quite a bit more for a variety of reasons.

@evanferguson28

question: where does refiner fit in this version?

@shadowdoggie

shadowdoggie commented Jul 13, 2023

It does seem to work for me with these arguments: "--medvram --no-half-vae", though it is insanely slow compared to ComfyUI. And I am assuming the refiner doesn't work yet?

@shadowdoggie

Just to clarify and for context:

With ComfyUI I had 1.8 it/s average on my 2080, and with this, as of now, 1.10 it/s avg.

@AUTOMATIC1111
Owner Author

AUTOMATIC1111 commented Jul 13, 2023

shadowdoggie: what cross attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on 3090.

Edit:
I'm sorry I seem to have mis-reported those. I don't know how I got those results, could be a combination of debugging mode and running on pictures of different sizes.

I get 2.5it/s for doggettx and 3.0it/s for sdp-no-mem and xformers generating the picture of the cosmonaut from the first post.

@AUTOMATIC1111
Owner Author

evanferguson28: no refiner support yet

@art926

art926 commented Jul 13, 2023

shadowdoggie, I don't think you're supposed to use it with lower than 1024x1024 resolution

@shadowdoggie

shadowdoggie: what cross attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on 3090.

Hmm, I don't see an option for xformers; how would I utilize that feature?
[image]

@FurkanGozukara

shadowdoggie: what cross attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on 3090.

This is great speed.

ComfyUI is like 3 times slower than this on an RTX 3090 Ti on my machine.

@shadowdoggie

shadowdoggie, I don't think you're supposed to use it with lower than 1024x1024 resolution

I changed that later; not sure how you possibly saw that comment of mine, because I deleted it.

@CH-ZH

CH-ZH commented Jul 26, 2023

The same issue here.

@slavakurilyak

@Stability-AI released two new open models (see Inference for file hashes) for SDXL v1:

  1. SDXL-base-1.0: An improved version over SDXL-base-0.9.
  2. SDXL-refiner-1.0: An improved version over SDXL-refiner-0.9.


@seedlord

those links are 404. it's not out yet.

they are live

@nlienard

nlienard commented Jul 26, 2023

On the SDXL branch, trying to load the 1.0 model: whereas it was working well with my RTX 3060 12GB with 0.9, I got a memory issue while trying to load 1.0.
Args are: --xformers --no-half-vae --medvram
On the dev branch, it works well.

@dhwz
Contributor

dhwz commented Jul 26, 2023

Guys, stop using the outdated sdxl branch and posting to this already-merged PR.

@wzgrx
Contributor

wzgrx commented Jul 27, 2023

imports: 3.1s, setup codeformer: 0.2s, list SD models: 0.2s, load scripts: 9.2s, initialize extra networks: 0.1s, create ui: 2.4s, gradio launch: 3.5s, app_started_callback: 1.3s).
*** Failed reading extension data from Git repository (enhanced-img2img)
    Traceback (most recent call last):
      File "G:\stable-diffusion-webui\modules\extensions.py", line 79, in do_read_info_from_repo
        commit = repo.head.commit
      File "G:\stable-diffusion-webui\venv\lib\site-packages\git\refs\symbolic.py", line 226, in _get_commit
        obj = self._get_object()
      File "G:\stable-diffusion-webui\venv\lib\site-packages\git\refs\symbolic.py", line 219, in _get_object
        return Object.new_from_sha(self.repo, hex_to_bin(self.dereference_recursive(self.repo, self.path)))
      File "G:\stable-diffusion-webui\venv\lib\site-packages\git\objects\base.py", line 94, in new_from_sha
        oinfo = repo.odb.info(sha1)
      File "G:\stable-diffusion-webui\venv\lib\site-packages\git\db.py", line 40, in info
        hexsha, typename, size = self._git.get_object_header(bin_to_hex(binsha))
      File "G:\stable-diffusion-webui\modules\gitpython_hack.py", line 18, in get_object_header
        ret = subprocess.check_output(
      File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 421, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
      File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 505, in run
        stdout, stderr = process.communicate(input, timeout=timeout)
      File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1154, in communicate
        stdout, stderr = self._communicate(input, endtime, timeout)
      File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1546, in _communicate
        raise TimeoutExpired(self.args, orig_timeout)
    subprocess.TimeoutExpired: Command '['git', 'cat-file', '--batch-check']' timed out after 2 seconds


*** Failed reading extension data from Git repository (novelai-2-local-prompt): identical traceback
*** Failed reading extension data from Git repository (openOutpaint-webUI-extension): identical traceback
*** Failed reading extension data from Git repository (prompt-fusion-extension): identical traceback


@wzgrx
Contributor

wzgrx commented Jul 27, 2023

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Time taken: 0.0 sec.

@MoonRide303

Old models still work fine with webui v1.5.1, but attempts to generate anything with SDXL (command line "--medvram --no-half-vae") end up with this:

---
*** Error completing request
*** Arguments: ('task(c87ks4kyjvgv80z)', 'whatever', '', [], 20, 0, False, False, 1, 1, 7, 0.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x0000020F91E765C0>, 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC07177F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EE718D3C0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC0715780>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F91DB9870>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC0834460>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC0CC78B0>, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, 10.0, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 75, 0.0001, 0.0, False, True, False, False) {}
    Traceback (most recent call last):
      File "D:\tools\Stable-Diffusion-web-UI\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "D:\tools\Stable-Diffusion-web-UI\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\tools\Stable-Diffusion-web-UI\modules\txt2img.py", line 62, in txt2img
        processed = processing.process_images(p)
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 677, in process_images
        res = process_images_inner(p)
      File "D:\tools\Stable-Diffusion-web-UI\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 783, in process_images_inner
        p.setup_conds()
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 1191, in setup_conds
        super().setup_conds()
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 364, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 353, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "D:\tools\Stable-Diffusion-web-UI\extensions\prompt-fusion-extension\lib_prompt_fusion\hijacker.py", line 15, in wrapper
        return function(*args, **kwargs, original_function=self.__original_functions[attribute])
      File "D:\tools\Stable-Diffusion-web-UI\extensions\prompt-fusion-extension\scripts\promptlang.py", line 38, in _hijacked_get_learned_conditioning
        flattened_conds = original_function(model, flattened_prompts, total_steps)
      File "D:\tools\Stable-Diffusion-web-UI\modules\prompt_parser.py", line 163, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\tools\Stable-Diffusion-web-UI\modules\sd_models_xl.py", line 24, in get_learned_conditioning
        "original_size_as_tuple": torch.tensor([height, width], **devices_args).repeat(len(batch), 1),
    TypeError: must be real number, not NoneType

---

GPU: RTX 4080.

@Kadah

Kadah commented Jul 27, 2023

Disable any extension that hooks or hijacks generation when using SDXL until they are updated.

I found these so far that will cause generation to fail when using SDXL:

  • lycoris (is this still needed with the recent base expanded lora support? Edit: I've seen a few times that it's not needed anymore)
  • controlnets (looks like a fix is coming soon)
  • prompt-fusion

@dhwz
Contributor

dhwz commented Jul 27, 2023

@Kadah lycoris extension is no longer required and must be removed

@eniora

eniora commented Jul 27, 2023

SDXL 1024x1024 is taking just over a minute for me on a mere 1070 8GB; not sure why people keep saying A1111 is slow. On ComfyUI it's slower for me for some reason (a minute and a half on Comfy versus a minute and 20 seconds on A1111), both using xformers. Though it's worth mentioning that on A1111 the --medvram flag is a must for 8GB or lower cards when using SDXL (otherwise generating 1024x1024 can take 15 mins). @AUTOMATIC1111 can --medvram be enforced for low-VRAM (8GB or less) cards, at least only when SDXL is loaded, so people stop complaining about A1111 being slow with SDXL? I think Comfy does this automatically; that's why you don't see people complaining about it being super slow. (A sketch of such auto-detection follows below the screenshot.)

I just wish the refiner process could be semi-automated on A1111. For me personally it's not a big deal, because I don't really find the refiner that great, TBH; sometimes it can make the image worse while only improving small parts of it. And I think in the future, when SDXL is heavily finetuned and some LoRAs are around, the refiner won't really be needed anyway.

[screenshot]
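
A sketch of the auto-detection suggested above (hypothetical logic, not anything webui actually does; should_force_medvram and threshold_gb are invented names):

    import torch

    def should_force_medvram(threshold_gb: float = 8.0) -> bool:
        # enable --medvram-style offloading on cards with <= threshold_gb of VRAM
        if not torch.cuda.is_available():
            return False
        total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        return total_gb <= threshold_gb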

@MoonRide303

@Kadah Thx for the hint - I've disabled prompt-fusion-extension, and it started working.

@MoonRide303

MoonRide303 commented Jul 27, 2023

@eniora I wanted to check out the refiner model, so I learned and played a bit with ComfyUI today. The proper setup (sampler, steps, denoise strength) might vary image to image, but I find it pretty useful and able to nicely refine the output from the base model (from subtle changes to more noticeable style changes; you can try using different or refined prompts for it). A subtle starting setup you can try is euler_ancestral, 2 steps, denoise 0.1. It looks like this:
[image]

If I want the refiner to have a bigger impact, I increase both denoise and steps for it (denoise 0.25 with 5 steps, denoise 0.5 with 10 steps, etc.). An interesting thing I've just noticed: the refiner model is able not just to add details, but also to do things like blur the background to make the image look more like a portrait (without being asked for it in the prompt), like this:
[image]

@AUTOMATIC1111 It would be really nice to be able to use the refiner model similarly in your UI.
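
For reference, the base-then-refiner flow described above maps onto diffusers' documented SDXL pipelines roughly like this (this is the diffusers API, not webui's; the 0.8 handoff point is an illustrative value):

    import torch
    from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    ).to("cuda")
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        text_encoder_2=base.text_encoder_2, vae=base.vae,
        torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
    ).to("cuda")

    prompt = "a cosmonaut riding a horse"
    # base model denoises the first 80% of the schedule, then hands latents to the refiner
    latents = base(prompt=prompt, num_inference_steps=40, denoising_end=0.8,
                   output_type="latent").images
    image = refiner(prompt=prompt, num_inference_steps=40, denoising_start=0.8,
                    image=latents).images[0]
    image.save("refined.png")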

@Kadah

Kadah commented Jul 27, 2023

Link to refiner request: #11919

I think I'd like to see the refiner implemented similar to HRF (hires fix), UI-wise, and with options to at least save the pre-refiner output (similar to the existing option to save pre-HRF outputs).

@VladimirNCh

SDXL 1024x1024 is taking just over a minute for me on a mere 1070 8GB, not sure why people keep saying A1111 is slow [...] (quoting eniora's comment above)

I run SDXL 0.9 on a Quadro K620 with 2GB. I manage to do one 512x712 generation; after that, webui-user needs to be restarted, as there is a constant low-memory error. Generation time is more than 15 minutes.

COMMANDLINE_ARGS= --opt-sub-quad-attention --lowvram --always-batch-cond-uncond --no-half-vae

@chdelacr

Old models still work fine with webui v1.5.1, but attempts to generate anything with SDXL (command line "--medvram --no-half-vae") end up with this: [...] TypeError: must be real number, not NoneType [...] (quoting MoonRide303's report above)

Looks like it's the neutral-prompt extension; just found the solution on Reddit.

@ClashSAN
Collaborator

I run SDXL_0.9 on a Quadro K620 with 2GB, I manage to do one 512x712 generation, after that webui_user needs to be restarted as there is a constant low memory error. Generation time more than 15 minutes

@VladimirNCh for larger sizes:
Try using the model with this VAE: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix

With that VAE you can run without --no-half-vae, which gives 66% more px:

--opt-sub-quad-attention --lowvram

OR

--opt-sdp-no-mem-attention --lowvram
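
Outside webui, the same fp16-fix VAE can be swapped in via diffusers, following that model card (shown only to illustrate why --no-half-vae becomes unnecessary with it):

    import torch
    from diffusers import AutoencoderKL, StableDiffusionXLPipeline

    # this VAE is patched to run in float16 without producing NaNs/black images
    vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        vae=vae, torch_dtype=torch.float16, variant="fp16",
    ).to("cuda")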

@chdelacr

I run SDXL_0.9 on a Quadro K620 with 2GB [...]

@VladimirNCh for larger sizes: Try using the model with this vae: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix [...] (quoting ClashSAN's advice above)

Thanks for sharing, I can generate now on a GTX1350 with 4GB 😅. It's pretty slow at launch, but at least it works now...

@ClashSAN
Collaborator

@chdelacr, what is your maximum size?

@remystic

Has anyone been able to run the SDXL model on Mac M1? If the answer is yes, who can help me with the settings? It generates very random things for me.

@MoonRide303

MoonRide303 commented Jul 30, 2023

@remystic If it already generates something, then the first thing to check would be resolution. If you go with the old defaults (512x512), it generates garbage, but it should start generating proper output after changing it to 1024x1024 (or any other compatible resolution; see Appendix I of the SDXL paper). Aside from that, you can check out the Mac guide for SDXL from Hugging Face (based on diffusers).

@ARDEACT

ARDEACT commented Aug 1, 2023

I cannot load the VAE from a separate file (VAE file in the folder); I get an error. Without it, SDXL loads just fine.

@eniora

eniora commented Aug 1, 2023

@ARDEACT make sure to use sdxl_vae.safetensors and not diffusion_pytorch_model.safetensors
If you're using sdxl_vae.safetensors and still get an error, then we need to see that error to try to help you.

@markrmiller

Seems kind of strange, but I can't get anything out of an SDXL model trained on someone. I've tried multiple models trained with sd-scripts. Put them in Comfy, use the keyword, and you get the subject. Put them in Automatic, use the keyword, and it's the same generic scene-type thing you'd get from the base model with nothing trained on that keyword.

@w-e-w
Collaborator

w-e-w commented Aug 2, 2023

Issues go to the issues tab.

Repository owner locked as resolved and limited conversation to collaborators Aug 2, 2023
AUTOMATIC1111 deleted the sdxl branch August 5, 2023 06:34