
support system prompt and cfg renorm in Lumina2 #6795

Merged (2 commits) on Feb 16, 2025

Conversation

@lzyhha (Contributor) commented Feb 12, 2025

This PR adds the official CFG Renorm implementation and two system prompts (superior and alignment).
We also update the workflow to use our recommended settings.

  1. CFG Renorm is implemented as a new node, RenormCFG, in comfy_extras/nodes_lumina2.py.

  2. The system prompts are implemented through a new node, CLIPTextEncodeLumina2, also in comfy_extras/nodes_lumina2.py. This node prevents users from deleting or forgetting the system prompt.

[Screenshot 2025-02-12 18:57:26]

  3. We also updated the workflow; you can drag the image below into ComfyUI to load it. The workflow uses the euler sampler, 50 steps, and a shift of 6.

[Workflow image: lumina2_cfgrenorm_systemprompt]
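For readers unfamiliar with the technique, the idea behind CFG Renorm can be sketched as follows: run classifier-free guidance as usual, then cap the guided prediction's norm at a multiple of the conditional prediction's norm. This is an illustrative sketch, not the RenormCFG node's actual code or signature; all names here are assumptions.

```python
import torch

def renorm_cfg(cond, uncond, cfg_scale, renorm_cfg=1.0):
    # Standard classifier-free guidance combination
    cfg_out = uncond + cfg_scale * (cond - uncond)
    # Per-sample norms over all non-batch dimensions
    dims = tuple(range(1, cond.ndim))
    cond_norm = torch.linalg.vector_norm(cond, dim=dims, keepdim=True)
    cfg_norm = torch.linalg.vector_norm(cfg_out, dim=dims, keepdim=True)
    # Cap the guided prediction's norm at renorm_cfg times the
    # conditional prediction's norm, independently per batch element
    max_norm = cond_norm * float(renorm_cfg)
    scale = torch.where(cfg_norm >= max_norm, max_norm / cfg_norm,
                        torch.ones_like(cfg_norm))
    return cfg_out * scale
```

The capping keeps high CFG scales from blowing up the prediction's magnitude while preserving its direction.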

@comfyanonymous (Owner)
There are some issues with the ruff style check; see the failing test.

@lzyhha (Contributor, Author) commented Feb 16, 2025

Thank you for checking; we have made the modifications according to your comments. @comfyanonymous

The new workflow:
[Workflow image: lumina2]

@comfyanonymous comfyanonymous merged commit 61c8c70 into comfyanonymous:master Feb 16, 2025
5 checks passed
@lzyhha (Contributor, Author) commented Feb 18, 2025

Thank you for reviewing, @comfyanonymous. One more thing: could you update the official workflow at https://comfyanonymous.github.io/ComfyUI_examples/lumina2/ to the workflow we provided here?

Our workflow supports the newly merged features and uses the euler sampler with 50 steps and a shift of 6.

@rockerBOO

The CFG Renorm doesn't work with batch size > 1.

!!! Exception during processing !!! Boolean value of Tensor with more than one value is ambiguous
Traceback (most recent call last):
  File "/home/rockerboo/sd/ComfyUI/execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/rockerboo/sd/ComfyUI/execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/home/rockerboo/sd/ComfyUI/execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/home/rockerboo/sd/ComfyUI/execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/home/rockerboo/sd/ComfyUI/nodes.py", line 1542, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/home/rockerboo/sd/ComfyUI/nodes.py", line 1509, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/home/rockerboo/sd/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 22, in informative_sample
    raise e
  File "/home/rockerboo/sd/ComfyUI/custom_nodes/comfyui-impact-pack/modules/impact/sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs)  # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
  File "/home/rockerboo/sd/ComfyUI/comfy/sample.py", line 45, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 1110, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 1000, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 985, in sample
    output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 953, in outer_sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 936, in inner_sample
    samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/home/rockerboo/sd/ComfyUI/comfy/patcher_extension.py", line 110, in execute
    return self.original(*args, **kwargs)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 715, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/home/rockerboo/sd/ComfyUI/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/rockerboo/sd/ComfyUI/comfy/k_diffusion/sampling.py", line 1333, in sample_res_multistep
    return res_multistep(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, s_noise=s_noise, noise_sampler=noise_sampler, eta=0., cfg_pp=False)
  File "/home/rockerboo/sd/ComfyUI/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/rockerboo/sd/ComfyUI/comfy/k_diffusion/sampling.py", line 1292, in res_multistep
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 379, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 916, in __call__
    return self.predict_noise(*args, **kwargs)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 919, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 366, in sampling_function
    return cfg_function(model, out[0], out[1], cond_scale, x, timestep, model_options=model_options, cond=cond, uncond=uncond_)
  File "/home/rockerboo/sd/ComfyUI/comfy/samplers.py", line 339, in cfg_function
    cfg_result = x - model_options["sampler_cfg_function"](args)
  File "/home/rockerboo/sd/ComfyUI/comfy_extras/nodes_lumina2.py", line 40, in renorm_cfg_func
    if new_pos_norm >= max_new_norm:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous

In a different codebase I fixed it by iterating over the batch (assigning back by index so the rescaling actually modifies the tensor):

                max_new_norms = cond_norm * float(renorm_cfg)
                noise_norms = torch.linalg.vector_norm(
                    noise_pred, dim=tuple(range(1, len(noise_pred.shape))), keepdim=True
                )
                # Iterate through the batch and rescale elements whose norm exceeds the cap
                for i, (noise_norm, max_new_norm) in enumerate(zip(noise_norms, max_new_norms)):
                    if noise_norm >= max_new_norm:
                        noise_pred[i] = noise_pred[i] * (max_new_norm / noise_norm)

I could make a PR but I might not be able to for a little bit.
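For what it's worth, the same per-sample capping can also be written without the Python loop, using torch.where; this sidesteps both the ambiguous-bool error and the need to assign back element by element. A sketch (the variable names and shapes are assumptions about the surrounding code, not the actual node internals):

```python
import torch

def renorm_batch(noise_pred, cond_norm, renorm_cfg):
    # Assumed shapes: noise_pred is (B, ...); cond_norm broadcasts
    # against a (B, 1, ..., 1) per-sample norm tensor.
    dims = tuple(range(1, noise_pred.ndim))
    max_new_norms = cond_norm * float(renorm_cfg)
    noise_norms = torch.linalg.vector_norm(noise_pred, dim=dims, keepdim=True)
    # Scale down only the batch elements whose norm exceeds the cap
    scale = torch.where(noise_norms >= max_new_norms,
                        max_new_norms / noise_norms,
                        torch.ones_like(noise_norms))
    return noise_pred * scale
```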
