
[Bug]: Error occurred when executing LayeredDiffusionDecodeRGBA #33

Xiahussheng opened this issue Mar 6, 2024 · 7 comments

What happened?

The layer_diffusion_diff_fg workflow you provided is missing the final step that generates a transparent image. I tried to add the LayeredDiffusionDecodeRGBA node myself, but it raises a runtime error and I don't know why.

[Bug]: Error occurred when executing LayeredDiffusionDecodeRGBA: Sizes of tensors must match except in dimension 1. Expected size 40 but got size 39 for tensor number 1 in the list.
(screenshot attached: PixPin_2024-03-06_14-26-28)

Steps to reproduce the problem

/

What should have happened?

/

Commit where the problem happens

ComfyUI:
ComfyUI-layerdiffuse:

Sysinfo

Error occurred when executing LayeredDiffusionDecodeRGBA:

Sizes of tensors must match except in dimension 1. Expected size 40 but got size 39 for tensor number 1 in the list.

File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 160, in decode
image, mask = super().decode(samples, images, sub_batch_size)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 136, in decode
self.vae_transparent_decoder.decode_pixel(
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 302, in decode_pixel
y = self.estimate_augmented(pixel, latent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 278, in estimate_augmented
eps = self.estimate_single_pass(feed_pixel, feed_latent).clip(0, 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 249, in estimate_single_pass
y = self.model(pixel, latent)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 212, in forward
sample = upsample_block(sample, res_samples, emb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\diffusers\models\unet_2d_blocks.py", line 2181, in forward
hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
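For reference, a 39-vs-40 mismatch at a `torch.cat` skip connection is characteristic of a spatial dimension that turns odd partway down the UNet: stride-2 downsampling rounds up (`ceil(n/2)`), so upsampling the result back no longer matches the stored skip tensor. The sketch below illustrates the arithmetic only; the depth of 5 levels and the ceil-rounding padding behavior are assumptions about UNet1024, not taken from its source.

```python
import math

def unet_level_sizes(size: int, depth: int = 5) -> list:
    """Simulate per-level spatial sizes in a UNet encoder.

    Stride-2 convs with 'same'-style padding produce ceil(n/2) at each level.
    """
    sizes = [size]
    for _ in range(depth):
        sizes.append(math.ceil(sizes[-1] / 2))
    return sizes

# A 624-px dimension turns odd at level 4: 624 -> 312 -> 156 -> 78 -> 39 -> 20.
# Upsampling 20 yields 40, but the matching skip connection holds 39, so
# torch.cat along dim=1 fails with "Expected size 40 but got size 39".
print(unet_level_sizes(624))   # [624, 312, 156, 78, 39, 20]

# A dimension that is a multiple of 64 stays even at every level:
print(unet_level_sizes(640))   # [640, 320, 160, 80, 40, 20]
```

In other words, the failure depends on the image's height/width, not on batch settings.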

Console logs

/

Workflow json file

fg.json

Additional information

No response


ana55e commented Mar 7, 2024

Hi, the fix is as follows: the Empty Latent Image batch_size should equal sub_batch_size in Layer Diffuse Decode (RGBA). In your case, either change the Empty Latent Image batch_size to 16, or change sub_batch_size in Layer Diffuse Decode (RGBA) to 1.
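For context, sub_batch_size only controls how many latents the decoder processes per chunk. The helper below is an illustrative sketch of a typical sub-batching loop, not the node's actual code:

```python
def iter_sub_batches(items: list, sub_batch_size: int):
    """Yield consecutive chunks of at most sub_batch_size items."""
    for start in range(0, len(items), sub_batch_size):
        yield items[start:start + sub_batch_size]

# With a batch of 16 latents and sub_batch_size=16 everything decodes in one
# pass; with sub_batch_size=1 it decodes one image at a time. Either way the
# spatial height/width of each latent is untouched, which is why changing
# sub_batch_size cannot fix a height/width mismatch.
chunks = list(iter_sub_batches(list(range(16)), sub_batch_size=4))
print(len(chunks))   # 4
print(chunks[0])     # [0, 1, 2, 3]
```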

@huchenlei
Owner

I don't think sub_batch_size is the issue. This tensor-mismatch error mostly comes from the input image size not matching the generation target size.
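A quick way to rule this out is to make sure both the Empty Latent Image size and the uploaded image use dimensions that are multiples of 64. The factor of 64 here is an assumption (SD latents use a factor of 8, and the transparent decoder's UNet adds further halvings); the helper names below are illustrative:

```python
def round_up_to_multiple(n: int, multiple: int = 64) -> int:
    """Round a width or height up to the nearest multiple (64 assumed here)."""
    return ((n + multiple - 1) // multiple) * multiple

def safe_size(width: int, height: int, multiple: int = 64) -> tuple:
    """Return a (width, height) pair padded up to safe dimensions."""
    return (round_up_to_multiple(width, multiple),
            round_up_to_multiple(height, multiple))

print(safe_size(1000, 624))   # (1024, 640)
print(safe_size(1024, 1024))  # (1024, 1024) -- already safe
```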

@Xiahussheng
Author

> Hi, the fix is as follows: the Empty Latent Image batch_size should equal sub_batch_size in Layer Diffuse Decode (RGBA). In your case, either change the Empty Latent Image batch_size to 16, or change sub_batch_size in Layer Diffuse Decode (RGBA) to 1.


I changed sub_batch_size in Layer Diffuse Decode (RGBA) to 1, but it didn't work; I still get the same error.

@Xiahussheng
Author

> I don't think sub_batch_size is the issue. This tensor-mismatch error mostly comes from the input image size not matching the generation target size.


The size I set in Empty Latent Image is the same as the size of the image I uploaded. Judging from the results, the image with the gray background is generated normally; only the transparent-background decode fails.

@huchenlei
Owner

Can you try to use https://github.com/layerdiffusion/sd-forge-layerdiffuse for the same task? I would like to know whether this issue is ComfyUI-only.


YaseGar commented Mar 8, 2024

I also got the following error when trying to run the example workflow:

ERROR:root:Traceback (most recent call last):
File "ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 170, in decode
image, mask = super().decode(samples, images, sub_batch_size)
File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\layered_diffusion.py", line 127, in decode
self.vae_transparent_decoder = TransparentVAEDecoder(
File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 241, in __init__
model = UNet1024(in_channels=3, out_channels=4)
File "python_embeded\lib\site-packages\diffusers\configuration_utils.py", line 636, in inner_init
init(self, *args, **init_kwargs)
File "ComfyUI\custom_nodes\ComfyUI-layerdiffusion\lib_layerdiffusion\models.py", line 130, in __init__
self.mid_block = UNetMidBlock2D(
TypeError: UNetMidBlock2D.__init__() got an unexpected keyword argument 'attn_groups'
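Note that this traceback is a different failure from the one in the original report: here, the installed diffusers build's UNetMidBlock2D does not accept an `attn_groups` keyword, which points to a diffusers version mismatch rather than a size problem. Below is a sketch of a pre-flight version check; the minimum version string is an assumption (check the extension's requirements file for the real one), and the helper names are illustrative:

```python
def parse_version(v: str) -> tuple:
    """Parse 'X.Y.Z' into a comparable tuple of ints (pre-release tags ignored)."""
    return tuple(int(part) for part in v.split(".")[:3])

def diffusers_new_enough(installed: str, minimum: str = "0.25.0") -> bool:
    """True if the installed diffusers version meets the assumed minimum."""
    return parse_version(installed) >= parse_version(minimum)

print(diffusers_new_enough("0.24.0"))  # False -> upgrade: pip install -U diffusers
print(diffusers_new_enough("0.26.3"))  # True
```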

@Xiahussheng
Author

> Can you try to use https://github.com/layerdiffusion/sd-forge-layerdiffuse for the same task? I would like to know whether this issue is ComfyUI-only.

Yes, I've used it in WebUI Forge and it works fine, but not in ComfyUI.

4 participants