SD XL support #11757
Conversation
You might just want to try this queue extension for WebUI: https://github.com/ArtVentureX/sd-webui-agent-scheduler
It would be nice to have a separate "medvram" option for this, I think. When using SD 1.5-based checkpoints I don't need medvram, but for SDXL I'd need lowvram (if that works yet) because of my 10 GB VRAM.
Checked out b717eb7, but a black image is generated.
Try with |
My 3060 laptop only has 6 GB VRAM; it seems SDXL is unusable on it 😂
It shouldn't require so much VRAM to use SDXL. I have an RTX 3070 8 GB and it works well in ComfyUI generating 1024x1024.
revert SD2.1 back to use the original repo
add SDXL's force_zero_embeddings to negative prompt
Amazing. With my lowly 1080, I thought I'd been priced out of this one.
During generation with |
It runs slowly but OK in ComfyUI with 6 GB VRAM; hopefully improvements will get A1111 to that point, since I like A1111 quite a bit more for a variety of reasons.
Question: where does the refiner fit in this version?
It does seem to work for me with these arguments: "--medvram --no-half-vae", though it is insanely slow compared to ComfyUI, and I am assuming the refiner doesn't work yet?
Just to clarify and for context: with ComfyUI I had 1.8 it/s average on my 2080, and with this, as of now, 1.10 it/s average.
shadowdoggie: what cross-attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on a 3090. Edit: I get 2.5 it/s for Doggettx and 3.0 it/s for sdp-no-mem and xformers generating the picture of the cosmonaut from the first post.
evanferguson28: no refiner support yet
shadowdoggie, I don't think you're supposed to use it with lower than 1024x1024 resolution.
Hmm, I don't see an option for xformers; how would I utilize that feature?
This is great speed; ComfyUI is like 3 times slower than this on an RTX 3090 Ti on my machine.
I changed that later; not sure how you could possibly have seen that comment of mine, because I deleted it.
The same issue here.
@Stability-AI released two new open models (see Inference for file hashes) for SDXL v1:
They are live.
On the SDXL branch: trying to load the 1.0 model. Whereas it was working well on my RTX 3060 12GB with 0.9, I got a memory issue while trying to load 1.0.
Guys, stop using the outdated sdxl branch and posting to this already-merged PR.
imports: 3.1s, setup codeformer: 0.2s, list SD models: 0.2s, load scripts: 9.2s, initialize extra networks: 0.1s, create ui: 2.4s, gradio launch: 3.5s, app_started_callback: 1.3s).
*** Failed reading extension data from Git repository (novelai-2-local-prompt)
*** Failed reading extension data from Git repository (openOutpaint-webUI-extension)
*** Failed reading extension data from Git repository (prompt-fusion-extension)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Old models still work fine with webui v1.5.1, but attempts to generate anything with SDXL (command line "--medvram --no-half-vae") end up with this:
GPU: RTX 4080.
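For context, this class of error usually means some code (often an extension) built an index tensor on the CPU while the model weights live on CUDA. A minimal, hypothetical reproduction in PyTorch (not webui code):

```python
import torch

emb = torch.nn.Embedding(10, 4)   # weights start on the CPU
idx = torch.tensor([1, 2, 3])     # index tensor, also on the CPU

if torch.cuda.is_available():
    emb = emb.cuda()
    # Calling emb(idx) now would raise:
    #   RuntimeError: Expected all tensors to be on the same device,
    #   but found at least two devices, cpu and cuda:0!
    # The fix is to move the indices to the module's device first.
    idx = idx.to(next(emb.parameters()).device)

out = emb(idx)
print(tuple(out.shape))  # (3, 4)
```

Disabling the offending extension (as suggested below) or updating it so it moves its tensors to the model's device resolves this.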
Disable any extension that hooks or hijacks generation when using SDXL until they are updated. I found these so far that will cause generation to fail when using SDXL:
@Kadah the lycoris extension is no longer required and must be removed.
SDXL 1024x1024 is taking just over a minute for me on a mere 1070 8GB; not sure why people keep saying A1111 is slow. ComfyUI is actually slower for me for some reason (a minute and a half on Comfy versus a minute and 20 seconds on A1111), both using xformers. It's worth mentioning that on A1111 the --medvram flag is a must for 8GB or lower cards when using SDXL (otherwise generating 1024x1024 can take 15 minutes).

@AUTOMATIC1111 can --medvram be enforced for low-VRAM (8GB or less) cards, at least only when SDXL is loaded, so people stop complaining about A1111 being slow with SDXL? I think Comfy does this automatically; that's why you don't see people complaining about it being super slow.

I just wish the refiner process could be semi-automated on A1111. For me personally it's not a big deal, because I don't really find the refiner that great, to be honest; sometimes it makes the image worse while only improving small parts of it. And I think in the future, when SDXL is heavily finetuned and some LoRAs are around, the refiner won't really be needed anyway.
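For anyone looking for the concrete setup: these flags go in the launch script's COMMANDLINE_ARGS. A sketch for webui-user.sh (the .bat variant uses `set` instead of `export`; treat this flag combination as a suggestion for 8 GB cards, not the only valid one):

```shell
#!/usr/bin/env bash
# webui-user.sh excerpt: flags discussed in this thread for 8 GB cards + SDXL.
# --medvram     : offload model parts to keep VRAM usage down
# --no-half-vae : avoid black images from the half-precision SDXL VAE
# --xformers    : enable the xformers cross-attention optimization
export COMMANDLINE_ARGS="--medvram --no-half-vae --xformers"
```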
@Kadah Thanks for the hint - I've disabled prompt-fusion-extension, and it started working.
@eniora I wanted to check out the refiner model, so I learned and played a bit with ComfyUI today. The proper setup (sampler, steps, denoise strength) might vary image to image, but I find it pretty useful and able to nicely refine the output from the base model (from subtle changes to a more noticeable style change - you can try using different or refined prompts for it).

A subtle starting setup you can try is euler_ancestral, 2 steps, denoise 0.1. If I want the refiner to have a bigger impact, I increase both denoise and steps for it (denoise 0.25 with 5 steps, denoise 0.5 with 10 steps, etc.).

An interesting thing I've just noticed: the refiner model is able not just to add details, but also to do things like blur the background to make the image look more like a portrait (without being asked for it in the prompt).

@AUTOMATIC1111 It would be really nice to be able to use the refiner model similarly in your UI.
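The denoise/steps pairings above follow a roughly fixed ratio (about 20 refiner steps per 1.0 of denoise). A tiny sketch of that rule of thumb (a hypothetical helper, not part of webui or ComfyUI):

```python
def refiner_steps(denoise: float, steps_per_full_denoise: int = 20) -> int:
    """Suggested refiner step count for a given denoise strength.

    Matches the pairs quoted above: 0.1 -> 2, 0.25 -> 5, 0.5 -> 10.
    """
    return max(1, round(steps_per_full_denoise * denoise))

for d in (0.1, 0.25, 0.5):
    print(d, refiner_steps(d))
```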
Link to refiner request: #11919. I think I'd like to see the refiner implemented similar to HRF, UI-wise, and with an option to at least save the pre-refiner output (similar to the option to save pre-HRF outputs).
I run SDXL 0.9 on a Quadro K620 with 2 GB. I manage to do one 512x712 generation; after that, webui_user needs to be restarted as there is a constant low-memory error. Generation time is more than 15 minutes.
COMMANDLINE_ARGS= --opt-sub-quad-attention --lowvram --always-batch-cond-uncond --no-half-vae
Looks like it's the neutral prompt extension; I just found the solution on Reddit.
@VladimirNCh for larger sizes: Without --no-half-vae gives 66% more px
OR
Thanks for sharing, I can generate now on a GTX1350 with 4GB 😅. It's pretty slow at launch, but at least it works now...
@chdelacr, what is your maximum size?
Has anyone been able to run the SDXL model on a Mac M1? If the answer is yes, who can help me with the settings? It generates very random things for me.
@remystic If it already generates something, then the first thing to check would be resolution. If you go with the old defaults (512x512) it generates garbage, but it should start producing proper output after changing it to 1024x1024 (or any other compatible resolution - see Appendix I of the SDXL paper). Aside from that, you can check out the Mac guide for SDXL from Hugging Face (based on diffusers).
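To make the resolution point concrete, here is a small hypothetical checker (the tuple list is a commonly cited subset of SDXL training resolutions; consult Appendix I of the paper for the full set):

```python
# Commonly cited SDXL-native resolutions (subset; all are multiples of 64,
# each close to 1024*1024 total pixels).
SDXL_RESOLUTIONS = [
    (1024, 1024), (1152, 896), (896, 1152),
    (1216, 832), (832, 1216), (1344, 768), (768, 1344),
]

def is_sdxl_friendly(width: int, height: int) -> bool:
    """Rough heuristic: dimensions divisible by 64 and near one megapixel."""
    megapixel_ratio = (width * height) / (1024 * 1024)
    return width % 64 == 0 and height % 64 == 0 and 0.8 <= megapixel_ratio <= 1.25

print(is_sdxl_friendly(512, 512))    # False: the old SD1 default is far too small
print(is_sdxl_friendly(1024, 1024))  # True
```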
I cannot load the VAE as a separate file (a VAE file in the folder); I get an error. Without it, SDXL loads just fine.
@ARDEACT make sure to use sdxl_vae.safetensors and not diffusion_pytorch_model.safetensors
Seems kind of strange, but I can't get anything out of an SDXL model trained on someone. I've tried multiple models trained with sd-scripts. Put them in Comfy and use the keyword, and I get the subject. Put them in Automatic and use the keyword, and it's the same generic scene-type thing you'd get from the base model with nothing trained on that keyword.
Issues go to the issues tab.
Description

For SDXL, the conditioning is a dict ({'crossattn': Tensor(2x77x2048), 'vector': Tensor(2x2048)}) instead of a single tensor like it was for SD1 (Tensor(2x77x768)). Use the --no-half-vae command line argument.

This branch has now been merged into dev. If you are on the sdxl branch, use git switch dev to get the latest dev updates.

To get the dev branch in a new webui installation:
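A minimal command sequence for that (assuming a fresh clone of the AUTOMATIC1111 repository; adjust the target directory to taste):

```shell
# Clone the webui repository and switch to the dev branch.
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git switch dev
```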
Original image:
Generated in webui:
Checklist: