[bug]: Flux broken (on MPS ?) in 5.4.3.rc2, dtype issue #7422

Closed
Vargol opened this issue Dec 3, 2024 · 0 comments · Fixed by #7423
Labels
bug Something isn't working

Comments

@Vargol (Contributor) commented Dec 3, 2024

Is there an existing issue for this problem?

  • I have searched the existing issues

Operating system

macOS

GPU vendor

Apple Silicon (MPS)

GPU model

M3 (base model, 10-core GPU revision)

GPU VRAM

24 GB

Version number

v5.4.3rc2

Browser

Version 18.1.1 (20619.2.8.11.12)

Python dependencies

{
"accelerate": "1.0.1",
"compel": "2.0.2",
"cuda": null,
"diffusers": "0.31.0",
"numpy": "1.26.4",
"opencv": "4.9.0.80",
"onnx": "1.16.1",
"pillow": "10.4.0",
"python": "3.11.10",
"torch": "2.4.1",
"torchvision": "0.19.1",
"transformers": "4.46.3",
"xformers": null
}

What happened

Running a Flux render fails with:

[2024-12-03 11:57:12,518]::[InvokeAI]::ERROR --> Error while invoking session bd7dcd2c-d725-4936-947b-4f343ad12771, invocation 09dabbc3-ca5a-4f79-8454-ab12bac4d3d1 (flux_denoise): Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::BFloat16 instead.
[2024-12-03 11:57:12,518]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/app/invocations/baseinvocation.py", line 300, in invoke_internal
    output = self.invoke(context)
             ^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/app/invocations/flux_denoise.py", line 138, in invoke
    latents = self._run_diffusion(context)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/app/invocations/flux_denoise.py", line 334, in _run_diffusion
    x = denoise(
        ^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/backend/flux/denoise.py", line 73, in denoise
    pred = model(
           ^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/backend/flux/model.py", line 125, in forward
    img, txt = CustomDoubleStreamBlockProcessor.custom_double_block_forward(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/backend/flux/custom_block_processor.py", line 78, in custom_double_block_forward
    img, txt, img_q = CustomDoubleStreamBlockProcessor._double_stream_block_forward(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/backend/flux/custom_block_processor.py", line 49, in _double_stream_block_forward
    attn = attention(q, k, v, pe=pe, attn_mask=attn_mask)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/SSD2TB/AI/InvokeAI/lib/python3.11/site-packages/invokeai/backend/flux/math.py", line 11, in attention
    x = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected query, key, and value to have the same dtype, but got query.dtype: float key.dtype: float and value.dtype: c10::BFloat16 instead.
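
For context, torch.nn.functional.scaled_dot_product_attention requires query, key, and value to share a dtype. A minimal sketch of the same class of failure (shapes and dtypes chosen only for illustration, not taken from the model; the exact message can vary by backend):

import torch

# q/k end up in float32 after apply_rope upcasts them, while v keeps the model's bfloat16.
q = torch.randn(1, 24, 16, 128, dtype=torch.float32)
k = torch.randn(1, 24, 16, 128, dtype=torch.float32)
v = torch.randn(1, 24, 16, 128, dtype=torch.bfloat16)

try:
    torch.nn.functional.scaled_dot_product_attention(q, k, v)
except RuntimeError as err:
    print(err)  # "Expected query, key, and value to have the same dtype, ..."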

What you expected to happen

The Flux render runs to completion and produces a nice image.

How to reproduce the problem

Start the app and render an image using a Flux model (GGUF or full precision, it doesn't matter).

Additional context

Backing out part of commit a03721d fixes it, specifically the change it made to apply_rope, e.g.:

from torch import Tensor

def apply_rope(xq: Tensor, xk: Tensor, freqs_cis: Tensor) -> tuple[Tensor, Tensor]:
    xq_ = xq.view(*xq.shape[:-1], -1, 1, 2)
    xk_ = xk.view(*xk.shape[:-1], -1, 1, 2)
    xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]
    xk_out = freqs_cis[..., 0] * xk_[..., 0] + freqs_cis[..., 1] * xk_[..., 1]
    # current code (drops the cast, leaving q/k in freqs_cis's dtype):
    # return xq_out.view(*xq.shape), xk_out.view(*xk.shape)
    # restored cast back to the input dtype:
    return xq_out.view(*xq.shape).type_as(xq), xk_out.view(*xk.shape).type_as(xk)

The commented-out return is the current code; the line with .type_as() is the earlier behaviour that should be restored.
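
A hedged illustration of why the cast matters (dummy shapes, not the real Flux tensors): freqs_cis is typically held in float32, so the rotary multiply promotes a bfloat16 query to float32 under PyTorch type promotion, and .type_as() casts it back:

import torch

xq = torch.randn(1, 24, 16, 128, dtype=torch.bfloat16)            # model dtype
freqs_cis = torch.randn(1, 1, 16, 64, 2, 2, dtype=torch.float32)  # rotary table

xq_ = xq.view(*xq.shape[:-1], -1, 1, 2)
xq_out = freqs_cis[..., 0] * xq_[..., 0] + freqs_cis[..., 1] * xq_[..., 1]

print(xq_out.dtype)                              # torch.float32 (promoted)
print(xq_out.view(*xq.shape).type_as(xq).dtype)  # torch.bfloat16 (restored)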

Discord username

No response

Vargol added the "bug" label on Dec 3, 2024
Vargol changed the title from "[bug]: Flux broken (on MPS ?) n 5.4.3.rc2 dtype issue" to "[bug]: Flux broken (on MPS ?) in 5.4.3.rc2, dtype issue" on Dec 3, 2024