Control Net Reference Only does not work anymore #1571

Closed
ChrisKorz opened this issue Jun 6, 2023 · 25 comments
Labels
help wanted Extra attention is needed

Comments

@ChrisKorz

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Hello, I don't understand it: ControlNet Reference Only isn't working at all anymore, even though it used to work perfectly fine and I was very satisfied with the results. I don't get any error messages; it's just that nothing happens. I check "Enable" and "Pixel Perfect" just like I used to, but nothing changes.

I have already uninstalled/reinstalled StableDiffusion and ControlNet, but it still doesn't work.

Steps to reproduce the problem

  1. Go to the WebUI
  2. Put an image into ControlNet
  3. Select ReferenceOnly
  4. Check "Enable" and "Pixel Perfect"
  5. Press "Generate"

What should have happened?

Reference Only should work and take the image as reference (as it did before)

Commit where the problem happens

webui: automatic1111
controlnet: 1b2aa4a9

What browsers do you use to access the UI ?

Brave

Command Line Arguments

--theme dark --xformers

List of enabled extensions

sd-webui-controlnet
LDSR
Lora
ScuNET
SwinIR
prompt-bracket-checker

Console logs

Error running process: C:\Users\Shadow\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
  File "C:\Users\Shadow\stable-diffusion-webui\modules\scripts.py", line 451, in process
    script.process(p, *script_args)
  File "C:\Users\Shadow\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 676, in process
    model_net = Script.load_control_model(p, unet, unit.model, unit.low_vram)
  File "C:\Users\Shadow\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 260, in load_control_model
    model_net = Script.build_control_model(p, unet, model, lowvram)
  File "C:\Users\Shadow\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 270, in build_control_model
    raise RuntimeError(f"You have not selected any ControlNet Model.")
RuntimeError: You have not selected any ControlNet Model.

Additional information

No response

@LFWarsen

LFWarsen commented Jun 6, 2023

You need to select a model (and preprocessor), for instance OpenPose, depth, canny, lineart, etc. The image you put into ControlNet will then be converted to an image it can actually use.

@lllyasviel
Collaborator

cannot reproduce. reference works without problems on 4 different devices with 12 different configurations.
let us know if anyone else has this problem.

@lllyasviel added the help wanted (Extra attention is needed) label on Jun 7, 2023
@aaronkingdom

yes, me too.... got the same problem
#1280 (comment)

@lllyasviel
Collaborator

lllyasviel commented Jun 7, 2023

Hello, please try the steps below and see if the problem is solved:

  1. update A1111 and CN to the latest version.
  2. disable all other A1111 extensions.
  3. disable all browser extensions or switch to another browser.
  4. remove all cmd flags in A1111.
  5. delete "config.json" and "ui-config.json". (dangerous, remember to back up the files first)

The above steps solve 99% of problems. After your problem is solved, you can begin to enable other extensions one-by-one, or add A1111 cmd flags one-by-one, to find out what is causing the problem, and let us know.

If you are the unlucky 1%, you can try to use "git checkout" to find out whether previous versions of CN work for you, and if that is the case, please let us know your last working version so that we can debug the problem.
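
For anyone unfamiliar with git, a minimal sketch of that checkout approach (run from the stable-diffusion-webui folder; the checkout target is only a placeholder, substitute whatever older commit or release tag you want to test):

cd extensions/sd-webui-controlnet
git log --oneline                # list recent commits so you can pick an older one
git checkout <commit-or-tag>     # placeholder for the older ControlNet version to test
git checkout main                # switch back to the latest version when done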

If you still cannot solve the problem, please give as much detail as possible about your environment.

@aaronkingdom

omg, I'm the 1%...
my working environment:

ControlNet v1.1.219

version: v1.3.2
python: 3.10.9
torch: 2.0.1+cu118
xformers: N/A
gradio: 3.32
checkpoint: fc2511737a

--opt-sdp-no-mem-attention --api --device-id=1 --autolaunch --opt-channelslast --update-all-extensions --precision=autocast

I'll try using the previous version.

@aaronkingdom

it shows me this error:
RuntimeError: shape '[2, -1, 8, 40]' is invalid for input of size 24640
Time taken: 2.85s  Torch active/reserved: 20070/20370 MiB, Sys VRAM: 6178/22528 MiB (27.42%)

@aaronkingdom


only the reference_* preprocessors are not working

@aaronkingdom

still can't get it working...

venv "D:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.3.2
Commit hash: baf6946e06249c5af9851c60171692c44ef633e0
Installing requirements

Launching Web UI with arguments: --device-id=1 --autolaunch
No module 'xformers'. Proceeding without it.
2023-06-07 15:50:24,172 - ControlNet - INFO - ControlNet v1.1.219
ControlNet preprocessor location: D:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-06-07 15:50:24,285 - ControlNet - INFO - ControlNet v1.1.219
Loading weights [fc2511737a] from D:\AI\stable-diffusion-webui\models\Stable-diffusion\chilloutmix_NiPrunedFp32Fix.safetensors
Creating model from config: D:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 9.0s (import torch: 2.1s, import gradio: 1.4s, import ldm: 0.5s, other imports: 1.3s, list SD models: 0.1s, load scripts: 2.0s, create ui: 0.5s, gradio launch: 0.9s).
Applying optimization: Doggettx... done.
Textual inversion embeddings loaded(25): awaitingtongue, bad-artist, bad-artist-anime, bad-hands-5, bad-picture-chill-75v, bad_prompt_version2, badhandv4, bhands-neg, chunli_alpha, corneo_brigitte, corneo_covering_breasts_two_hands, corneo_dva, corneo_pov_anal, EasyNegative, fenn_mei, lucy_kushinadaV2, myshoes1, ohmyfrick_yogapuss, pureerosface_v1, spreadassms, style-miaozu-20000, style-psycho, ulzzang-6500, was-gunpla, yaguru magiku
Textual inversion embeddings skipped(4): 21charturnerv2, DaveSpaceFour, dblx, painted_landscape
Model loaded in 7.9s (load weights from disk: 0.7s, create model: 0.5s, apply weights to model: 2.6s, apply half(): 0.7s, move model to device: 2.0s, load textual inversion embeddings: 1.3s).
2023-06-07 15:51:04,705 - ControlNet - INFO - Loading preprocessor: reference_only
2023-06-07 15:51:04,705 - ControlNet - INFO - Pixel Perfect Computation:
2023-06-07 15:51:04,705 - ControlNet - INFO - resize_mode = ResizeMode.INNER_FIT
2023-06-07 15:51:04,705 - ControlNet - INFO - raw_H = 512
2023-06-07 15:51:04,706 - ControlNet - INFO - raw_W = 512
2023-06-07 15:51:04,706 - ControlNet - INFO - target_H = 512
2023-06-07 15:51:04,706 - ControlNet - INFO - target_W = 512
2023-06-07 15:51:04,706 - ControlNet - INFO - estimation = 512.0
2023-06-07 15:51:04,706 - ControlNet - INFO - preprocessor resolution = 512
0%| | 0/20 [00:00<?, ?it/s]ControlNet used torch.float16 VAE to encode torch.Size([1, 4, 64, 64]).
25%|████████████████████▊ | 5/20 [00:08<00:24, 1.63s/it]
Error completing request█████████████▊ | 5/20 [00:02<00:08, 1.82it/s]
Arguments: ('task(0700dlcsncmaydi)', '1girl', '', [], 20, 0, True, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001B074581DB0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001AF56DB0B50>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001B0741398A0>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
processed = processing.process_images(p)
File "D:\AI\stable-diffusion-webui\modules\processing.py", line 610, in process_images
res = process_images_inner(p)
File "D:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\processing.py", line 728, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "D:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 295, in process_sample
return process.sample_before_CN_hack(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\processing.py", line 976, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
return func()
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 137, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 630, in forward_webui
return forward(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 533, in forward
outer.original_forward(
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "D:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "D:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\hook.py", line 666, in hacked_basic_transformer_inner_forward
x = self.attn2(self.norm2(x), context=context) + x
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\AI\stable-diffusion-webui\modules\sd_hijack_optimizations.py", line 247, in split_cross_attention_forward
s1 = einsum('b i d, b j d -> b i j', q[:, i:end], k)
File "D:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\functional.py", line 378, in einsum
return _VF.einsum(equation, operands) # type: ignore[attr-defined]
RuntimeError: einsum(): subscript b has size 8 for operand 1 which does not broadcast with previously seen size 16

@lllyasviel
Collaborator

Config corrupted. backup and delete "config.json" and "ui-config.json" and try again.
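
For anyone unsure how to do that safely, a minimal sketch (Windows command prompt, run from the stable-diffusion-webui root; renaming backs the files up instead of deleting them, and the webui should regenerate fresh copies on the next launch):

move config.json config.json.bak
move ui-config.json ui-config.json.bak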

@aaronkingdom

aaronkingdom commented Jun 7, 2023

Config corrupted. backup and delete "config.json" and "ui-config.json" and try again.

oh, you are my god :) // beautiful answer, this is working for me now :))) I've been waiting for almost a day... thank you very very much

@szriru

szriru commented Jun 7, 2023

With pixel perfect enabled, the output image goes white.
I don't know if it is exactly related to this issue, but I'm letting you know without opening a new issue.

@alenknight

Similar to others, I'm finding that with reference only & pixel perfect I don't get the counter/timer running; if I use anything else, it works.
Also, I can use one ControlNet unit with reference only; it's when I use 2+ images, both as reference only, that the problem occurs.

@ChrisKorz
Author

ChrisKorz commented Jun 8, 2023

Hi! Thanks all for the answers. @lllyasviel thanks for the method, it almost worked; I also had to manually add the "gradio" folder to the User/AppData/Local/Temp folder. As of today, it works. Thank you very much. I hope everyone gets their issue solved.

@bxclib2

bxclib2 commented Jun 18, 2023

I am facing the exact same issue in Google Colab with this notebook:
https://colab.research.google.com/github/camenduru/stable-diffusion-webui-colab/blob/main/stable/stable_diffusion_webui_colab.ipynb

All the other ControlNet modes work; only reference only does not. It generates the image without the reference.

In the Colab command line it shows:
RuntimeError: You have not selected any ControlNet Model.

@bxclib2

bxclib2 commented Jun 18, 2023

It seems to be a UI bug. I restarted, and it sometimes works now, but with very high probability it doesn't. Can someone try it and fix it? I reverted to version v1.1.156 and the problem was solved. It seems to be a bug in the new model selection UI. It is a very easy bug to reproduce.

@Stonepreheim

Happening to me as well, reference only keeps generating crazy hellscapes of human flesh.

@lcmiracle

After some testing using the instructions @lllyasviel gave above, I still cannot use references (any of them), but only when "Preview as Input" is selected. As far as I can tell, without that box checked the model generates normally using the reference (the result's pose and form somewhat resemble the original image, compared to when the option is checked), and it does not produce the following error output:
*** Error running process: E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "E:\stable-diffusion-webui\modules\scripts.py", line 519, in process
script.process(p, *script_args)
File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 731, in process
model_net = Script.load_control_model(p, unet, unit.model, unit.low_vram)
File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 292, in load_control_model
model_net = Script.build_control_model(p, unet, model, lowvram)
File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\controlnet.py", line 302, in build_control_model
raise RuntimeError(f"You have not selected any ControlNet Model.")
RuntimeError: You have not selected any ControlNet Model.

I've deleted all other extra extensions except for Lycoris; the following are all the extensions in the Extensions tab:

a1111-sd-webui-lycoris
sd-webui-controlnet
LDSR
Lora
ScuNET
SwinIR
canvas-zoom-and-pan
extra-options-section
mobile
prompt-bracket-checker

Also i2i reference seems over-burnt, but then again, I've never used i2i reference before

@shashanksangu

Download the model (.pth) files from here: https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

Place it inside: stable-diffusion-webui\extensions\sd-webui-controlnet\models

Select the respective model at runtime.

@lcmiracle-yh


Reference does not have a dedicated model, so downloading the .pth files suggested above does not apply here.

@amrithaw15

The same thing happened to me. I don't know what the issue was; it was working fine, and then suddenly it stopped taking reference_only. I did all the steps exactly as @lllyasviel wrote, including deleting "config.json" and "ui-config.json". Still no luck.

@MangoLion

MangoLion commented Oct 7, 2023

Confirmed: removing ui-config.json worked for me. Make sure to close any open SD browser tabs (not just restart the server).

@lastYoueven

lastYoueven commented Nov 14, 2023

I did a couple of things and the issue disappeared:
1. back up and delete "config.json" and "ui-config.json" and try again.
2. download the model you use in ControlNet from https://huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
Then restart.

@vixenius

disabling hypertile in settings fixed it for me

@NaughtDZ

disabling hypertile in settings fixed it for me

OMG, it works!
It seems that many people's similar issues with reference are caused by hypertile?

@vixenius

disabling hypertile in settings fixed it for me

OMG, it works! It seems that many people's similar issues with reference are caused by hypertile?

Well, for me, the errors started appearing after hypertile was introduced, so I disabled it and the errors went away. Glad you got it fixed too. Have great holidays!
