
[Bug]: Can't use any lora since last pull #12464

Closed
1 task done
OzerSkyyy opened this issue Aug 10, 2023 · 7 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@OzerSkyyy

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

I ran git pull today, and now I can't use any LoRA I have; I always get this error:
RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)
The numbers in parentheses change when I change the LoRA or the model.
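
For illustration, a minimal standalone sketch (not webui code) of what that error message means, assuming my reading is right that a 3072-feature activation is being pushed through a linear layer that only expects 768 input features (the LoRA down projection):

import torch

# Hypothetical repro of the same shape mismatch: a (77, 3072) tensor fed into a
# Linear(768, 128). PyTorch reports the transposed weight as mat2, which gives
# the exact message seen in the issue.
x = torch.randn(77, 3072)
layer = torch.nn.Linear(768, 128)
try:
    layer(x)
except RuntimeError as e:
    print(e)  # mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)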

Steps to reproduce the problem

  1. Go to the webui, txt2img tab
  2. Click a LoRA I want to use so it is added to the prompt
  3. Generate

What should have happened?

A normal image generation reflecting the LoRA I'm using

Version or Commit where the problem happens

1.5.1

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Nvidia GPUs (RTX 20 above)

Cross attention optimization

sdp

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

--xformers --no-half-vae

List of extensions

Dynamic prompts
a1111-sd-webui-locon
deforum-for-automatic1111-webui
model_preset_manager
sd-webui-additional-networks
sd-webui-controlnet
sd_civitai_extension
stable-diffusion-webui-model-toolkit
stable-diffusion-webui-pixelization
stable-diffusion-webui-promptgen
stable-diffusion-webui-sonar
stable-diffusion-webui-wd14-tagger
stable-diffusion-webui-wildcards
weight_gradient

Console logs

venv "E:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.5.1
Commit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a




#######################################################################################################
Initializing Civitai Link
If submitting an issue on github, please provide the below text for debugging purposes:

Python revision: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Civitai Link revision: 68a419d1cbf2481fa06ae130d5c0f1e6e7c87f01
SD-WebUI revision: 68f336bd994bed5442ad95bad6b6ad5564a5409a

Checking Civitai Link requirements...
[+] python-socketio[client] version 5.7.2 installed.

#######################################################################################################


Launching Web UI with arguments: --xformers --no-half-vae
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
[AddNet] Updating model hashes...
100%|███████████████████████████████████████████████████████████████████████████████| 73/73 [00:00<00:00, 11220.47it/s]
[AddNet] Updating model hashes...
100%|████████████████████████████████████████████████████████████████████████████████| 73/73 [00:00<00:00, 9726.93it/s]
2023-08-10 23:57:49,167 - ControlNet - INFO - ControlNet v1.1.234
ControlNet preprocessor location: E:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-08-10 23:57:49,436 - ControlNet - INFO - ControlNet v1.1.234
Civitai: API loaded
Loading weights [5493a0ec49] from E:\AI\stable-diffusion-webui\models\Stable-diffusion\aom3a1b.safetensors
Creating model from config: E:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
*Deforum ControlNet support: enabled*
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Civitai: Check resources for missing preview images
Startup time: 55.3s (launcher: 25.4s, import torch: 10.0s, import gradio: 4.8s, setup paths: 3.8s, other imports: 2.9s, setup codeformer: 0.2s, list SD models: 0.2s, load scripts: 5.0s, create ui: 2.3s, gradio launch: 0.5s).
Civitai: Found 25 resources missing preview images
Civitai: Found 1 hash matches
Civitai: Updated 0 preview images
Loading VAE weights specified in settings: E:\AI\stable-diffusion-webui\models\VAE\klF8Anime2VAE_klF8Anime2VAE.ckpt
Applying attention optimization: sdp... done.
Model loaded in 5.3s (load weights from disk: 0.3s, create model: 1.8s, apply weights to model: 1.0s, apply half(): 0.7s, load VAE: 0.4s, move model to device: 1.1s).
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (ui_controlnet_unit_for_input_mode) didn't receive enough input values (needed: 21, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
    [state, state, state, state, checkbox, checkbox, image, checkbox, dropdown, dropdown, slider, image, radio, checkbox, slider, slider, slider, slider, slider, checkbox, radio]
Received inputs:
    [None]
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (UiControlNetUnit) didn't receive enough input values (needed: 20, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
    [state, state, state, checkbox, checkbox, image, checkbox, dropdown, dropdown, slider, image, radio, checkbox, slider, slider, slider, slider, slider, checkbox, radio]
Received inputs:
    [None]
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (ui_controlnet_unit_for_input_mode) didn't receive enough input values (needed: 21, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
    [state, state, state, state, checkbox, checkbox, image, checkbox, dropdown, dropdown, slider, image, radio, checkbox, slider, slider, slider, slider, slider, checkbox, radio]
Received inputs:
    [None]
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (UiControlNetUnit) didn't receive enough input values (needed: 20, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
    [state, state, state, checkbox, checkbox, image, checkbox, dropdown, dropdown, slider, image, radio, checkbox, slider, slider, slider, slider, slider, checkbox, radio]
Received inputs:
    [None]
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (ui_controlnet_unit_for_input_mode) didn't receive enough input values (needed: 21, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
    [state, state, state, state, checkbox, checkbox, image, checkbox, dropdown, dropdown, slider, image, radio, checkbox, slider, slider, slider, slider, slider, checkbox, radio]
Received inputs:
    [None]
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1321, in process_api
    inputs = self.preprocess_data(fn_index, inputs, state)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1159, in preprocess_data
    self.validate_inputs(fn_index, inputs)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1146, in validate_inputs
    raise ValueError(
ValueError: An event handler (UiControlNetUnit) didn't receive enough input values (needed: 20, got: 1).
Check if the event handler calls a Javascript function, and make sure its return value is correct.
Wanted inputs:
    [state, state, state, checkbox, checkbox, image, checkbox, dropdown, dropdown, slider, image, radio, checkbox, slider, slider, slider, slider, slider, checkbox, radio]
Received inputs:
    [None]
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 247, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 273, in __call__
    await super().__call__(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 76, in __call__
    await self.app(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
    raise e
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
    await self.app(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 82, in app
    await func(session)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 289, in app
    await dependant.call(**values)
  File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 546, in join_queue
    if blocks.dependencies[event.fn_index].get("every", 0):
IndexError: list index out of range
*** Error completing request
*** Arguments: ('task(2fppyl9wswap0my)', ' <lora:acid 3:1>', '', [], 29, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001E927B6C670>, 0, True, False, 1, False, False, False, 1.1, 1.5, 100, 0.7, False, False, True, False, False, 0, 'Gustavosta/MagicPrompt-Stable-Diffusion', '', False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001E927B6F970>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001E927B6C2E0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x000001E927B6C1F0>, False, False, False, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50, False, False, 'Euler a', 0.95, 0.75, '0.75:0.95:5', '0.2:0.8:5', 'zero', 'pos', 'linear', 0.2, 0.0, 0.75, None, 'Lanczos', 1, 0, 0) {}
    Traceback (most recent call last):
      File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "E:\AI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\modules\txt2img.py", line 62, in txt2img
        processed = processing.process_images(p)
      File "E:\AI\stable-diffusion-webui\modules\processing.py", line 677, in process_images
        res = process_images_inner(p)
      File "E:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "E:\AI\stable-diffusion-webui\modules\processing.py", line 783, in process_images_inner
        p.setup_conds()
      File "E:\AI\stable-diffusion-webui\modules\processing.py", line 1191, in setup_conds
        super().setup_conds()
      File "E:\AI\stable-diffusion-webui\modules\processing.py", line 364, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
      File "E:\AI\stable-diffusion-webui\modules\processing.py", line 353, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "E:\AI\stable-diffusion-webui\modules\prompt_parser.py", line 163, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "E:\AI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
        c = self.cond_stage_model(c)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 234, in forward
        z = self.process_tokens(tokens, multipliers)
      File "E:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 271, in process_tokens
        z = self.encode_with_transformers(tokens)
      File "E:\AI\stable-diffusion-webui\modules\sd_hijack_clip.py", line 324, in encode_with_transformers
        outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
        return self.text_model(
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
        encoder_outputs = self.encoder(
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
        layer_outputs = encoder_layer(
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 389, in forward
        hidden_states = self.mlp(hidden_states)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 344, in forward
        hidden_states = self.fc1(hidden_states)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\networks.py", line 357, in network_Linear_forward
        return network_forward(self, input, torch.nn.Linear_forward_before_network)
      File "E:\AI\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\networks.py", line 345, in network_forward
        y = module.forward(y, input)
      File "E:\AI\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\network_lora.py", line 84, in forward
        return y + self.up_model(self.down_model(x)) * self.multiplier() * self.calc_scale()
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\AI\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\networks.py", line 357, in network_Linear_forward
        return network_forward(self, input, torch.nn.Linear_forward_before_network)
      File "E:\AI\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\..\..\..\extensions-builtin/Lora\networks.py", line 337, in network_forward
        y = original_forward(module, input)
      File "E:\AI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x32)

---

Additional information

No response

OzerSkyyy added the bug-report (Report of a bug, yet to be confirmed) label on Aug 10, 2023
@xSinStarx
Contributor

a1111-sd-webui-locon is deprecated. Try deleting the folder if you don't specifically need it.

https://github.com/KohakuBlueleaf/a1111-sd-webui-locon#deprecated-a1111-sd-webui-locon-deprecated

@JaimeBorondo

Duplicate of the issue I posted a couple of weeks ago.

There's a workaround (or fix?) for this, found by user aunymoons: you need to disable the following setting under "Compatibility": Lora/Networks: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension
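
If you want to check that toggle outside the UI, here is a minimal sketch; note that the lora_functional key name is my assumption about how that checkbox is persisted in the webui's config.json, so verify it against your own install:

import json
from pathlib import Path

# Path to your own webui install; adjust as needed.
cfg_path = Path(r"E:\AI\stable-diffusion-webui\config.json")
cfg = json.loads(cfg_path.read_text(encoding="utf-8"))

# The key is only written once the setting has been changed from its default,
# so a missing key means the checkbox is still at its default (unchecked).
print("lora_functional =", cfg.get("lora_functional"))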

@catboxanon
Collaborator

Closing as duplicate of #12104

@INkorPen

INkorPen commented Aug 11, 2023

I also hit the same situation in https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/v1.5.1/extensions-builtin/Lora/networks.py
I worked around it with the following change to network_forward:

def network_forward(module, input, original_forward):
    if len(loaded_networks) == 0:
        return original_forward(module, input)

    input = devices.cond_cast_unet(input)

    network_restore_weights_from_backup(module)
    network_reset_cached_weight(module)

    y = original_forward(module, input)

    network_layer_name = getattr(module, 'network_layer_name', None)
    for lora in loaded_networks:
        module = lora.modules.get(network_layer_name, None)
        if module is None:
            continue

        #y = module.forward(y, input)   # Stable Diffusion WebUI v1.5.1
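        # (with the old order, the already-computed layer output y, e.g. the 3072-feature
        #  result of CLIP's fc1, went into the LoRA down projection that expects the
        #  768-feature layer input, which is what raises the shape-mismatch error above)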
        y = module.forward(input, y)     # <-- fix here

    return y

@catboxanon
Collaborator

catboxanon commented Aug 11, 2023

My PR that was merged earlier uses that fix.

@OzerSkyyy
Author

OzerSkyyy commented Aug 11, 2023

I fixed it by, indeed, deleting the locon extension and unchecking "Lora/Networks: use old method that takes longer when you have multiple Loras active and produces same results as kohya-ss/sd-webui-additional-networks extension" in the settings. Thank you all!

@catboxanon
Collaborator

That just works around the issue -- for users who do need that option, it didn't work until my PR (#12466) fixed it. See #12104 (comment) for reference.
