Flux Control Lora not unloaded correctly #10202
Comments
I think this is expected. If you're doing a pipeline switch with `from_pipe()`: we expand the transformer's input layer when the Control LoRA is loaded. So, we'd want to call `unload_lora_weights()` before switching pipelines.
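A minimal sketch of that order of operations, reusing the model and LoRA ids from the reproduction below (the elided part stands for the control inference itself):

```python
import torch
from diffusers import FluxControlPipeline, FluxImg2ImgPipeline

# Load the control pipeline and the Canny Control LoRA.
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")

# ... run control inference ...

# Unload the Control LoRA *before* reusing the components in another pipeline.
pipe.unload_lora_weights()
img2img = FluxImg2ImgPipeline.from_pipe(pipe, torch_dtype=torch.bfloat16).to("cuda")
```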
I tried with
Yeah, looking into that now.
@christopher5106 #10206 should probably solve the problem.
Yes, it does work when using `unload_lora_weights` before switching.
That is expected for the reasons I mentioned. Do you think this should be documented? @yiyixuxu any thoughts?
@sayakpaul
It's a no-op here, I think. It comes from copying the Flux conversion script, which also has this:
@sayakpaul I thought it was solved. I don't know why, but it no longer works; something has changed. Please reopen.
What is the problem?
Unloading the Flux Control LoRA does not work anymore.

```python
import torch
from controlnet_aux import CannyDetector
from diffusers import FluxControlPipeline, FluxImg2ImgPipeline
from diffusers.utils import load_image

model = "black-forest-labs/FLUX.1-dev"

# Control pipeline with the Canny Control LoRA loaded.
pipe = FluxControlPipeline.from_pretrained(
    model,
    torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "black-forest-labs/FLUX.1-Canny-dev-lora"
)

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
processor = CannyDetector()
control_image = processor(control_image, low_threshold=50, high_threshold=200, detect_resolution=1024, image_resolution=1024)

image = pipe(
    prompt=prompt,
    control_image=control_image,
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]

# Unload the Control LoRA, then switch to the img2img pipeline.
pipe.unload_lora_weights()
pipe = FluxImg2ImgPipeline.from_pipe(
    pipe,
    torch_dtype=torch.bfloat16
)
pipe = pipe.to("cuda")

init_image = load_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg").resize((1024, 1024))
prompt = "cat wizard, gandalf, lord of the rings, detailed, fantasy, cute, adorable, Pixar, Disney, 8k"
image = pipe(
    prompt=prompt,
    image=init_image,
    num_inference_steps=28,
    strength=0.5,
    guidance_scale=2.5
).images[0]
```
When you asked me last month, it was working. In particular, I double-checked now: unloading works well with commit 1b202c5 but no longer with a recent version of diffusers.
Could you follow the guidelines from https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux#note-about-unloadloraweights-when-using-flux-loras and check again?
A `reset_to_overwritten_params=True` parameter was introduced in between my validations... With this parameter it now works well.
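A minimal sketch of the call referred to above (the `reset_to_overwritten_params` name is taken from the comment and the linked docs; `pipe` is the `FluxControlPipeline` from the reproduction):

```python
# Restore the transformer's original (pre-expansion) parameters when unloading,
# so the pipeline can be reused via from_pipe() afterwards.
pipe.unload_lora_weights(reset_to_overwritten_params=True)

img2img = FluxImg2ImgPipeline.from_pipe(pipe, torch_dtype=torch.bfloat16).to("cuda")
```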
Describe the bug
Hi,
There is a bug when switching pipelines from Flux dev after a Control LoRA has been loaded:
Reproduction
Replacing the `from_pipe` loading with standard loading (see the sketch below) shows the previous code should work.
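For reference, a minimal sketch of the "standard loading" variant mentioned above, understood here as instantiating `FluxImg2ImgPipeline` directly with `from_pretrained` instead of reusing the control pipeline via `from_pipe` (this interpretation is an assumption; the model id is reused from the reproduction):

```python
import torch
from diffusers import FluxImg2ImgPipeline

# Standard loading: build the img2img pipeline from the hub checkpoint directly,
# instead of reusing the control pipeline's components via from_pipe().
pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
```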
Logs
No response
System Info
Ubuntu
Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza