[Schedulers] Add SGMUniform #9416
@ighoshsubho thanks for picking this up, go ahead when you have time. Isn't that issue resolved though? We have the inpainting and img2img pipelines merged by now. I think you probably mean #9402.
my bad, ya it's #9402
SGM Uniform implementation in WebUI, same in Forge.

In Diffusers:

```python
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config,
    timestep_spacing="trailing",
    prediction_type="sample" if num_inference_steps == 1 else "epsilon",
)
```

Reproduction and comparison:

```python
from diffusers import EulerDiscreteScheduler
import torch

# Rebuild the training sigma schedule (scaled-linear betas, as in SD/SDXL).
beta_start = 0.00085
beta_end = 0.012
num_train_timesteps = 1000
betas = (
    torch.linspace(
        beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32
    )
    ** 2
)
alphas = 1.0 - betas
alphas_cumprod = torch.cumprod(alphas, dim=0)
# not flipped, contrary to diffusers
sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5
log_sigmas = sigmas.log()
discard_next_to_last_sigma = False


def sgm_uniform(n: int, sigma_min: float, sigma_max: float):
    # Uniform grid in timestep space, mapped back to sigmas; final sigma is 0.
    start = sigma_to_t(torch.tensor(sigma_max))
    end = sigma_to_t(torch.tensor(sigma_min))
    sigs = [t_to_sigma(ts) for ts in torch.linspace(start, end, n)[:-1]]
    sigs += [0.0]
    return torch.FloatTensor(sigs)


def sigma_to_t(sigma: torch.Tensor):
    # Invert the sigma schedule via log-linear interpolation between
    # the two nearest training timesteps.
    log_sigma = sigma.log()
    dists = log_sigma - log_sigmas[:, None]
    low_idx = dists.ge(0).cumsum(dim=0).argmax(dim=0).clamp(max=log_sigmas.shape[0] - 2)
    high_idx = low_idx + 1
    low, high = log_sigmas[low_idx], log_sigmas[high_idx]
    w = (low - log_sigma) / (low - high)
    w = w.clamp(0, 1)
    t = (1 - w) * low_idx + w * high_idx
    return t.view(sigma.shape)


def t_to_sigma(t: torch.Tensor):
    # Interpolate log-sigmas at (possibly fractional) timesteps.
    t = t.float()
    low_idx, high_idx, w = t.floor().long(), t.ceil().long(), t.frac()
    log_sigma = (1 - w) * log_sigmas[low_idx] + w * log_sigmas[high_idx]
    return log_sigma.exp()


m_sigma_min, m_sigma_max = (sigmas[0].item(), sigmas[-1].item())

num_inference_steps = 4
scheduler: EulerDiscreteScheduler = EulerDiscreteScheduler.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="scheduler",
    timestep_spacing="trailing",
    prediction_type="sample" if num_inference_steps == 1 else "epsilon",
)
scheduler.set_timesteps(num_inference_steps=num_inference_steps)
print(scheduler.sigmas)
# >>> tensor([14.6146, 4.0817, 1.6129, 0.6932, 0.0000])

sgm_uniform_sigmas = sgm_uniform(
    n=num_inference_steps + (1 if not discard_next_to_last_sigma else 0),
    sigma_min=m_sigma_min,
    sigma_max=m_sigma_max,
)
print(sgm_uniform_sigmas)
# >>> tensor([14.6146, 4.0861, 1.6156, 0.6952, 0.0000])
```
Diffusers Euler with trailing spacing: `tensor([14.6146, 4.0817, 1.6129, 0.6932, 0.0000])`. WebUI Euler with sgm_uniform: `tensor([14.6146, 4.0861, 1.6156, 0.6952, 0.0000])`. The tiny differences are likely due to interpolation: the WebUI maps sigmas through fractional timesteps with log-sigma interpolation, while Diffusers indexes integer timesteps directly. tl;dr: SGM Uniform is already supported in Diffusers with `timestep_spacing="trailing"`.
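For intuition on where those small differences come from, here is a minimal sketch comparing the two timestep grids (assuming the "trailing" formula mirrors what `EulerDiscreteScheduler.set_timesteps` does in the diffusers source): Diffusers lands on integer timesteps, while sgm_uniform allows fractional ones that then get interpolated back to sigmas.

```python
import numpy as np

num_train_timesteps = 1000
num_inference_steps = 4

# Diffusers timestep_spacing="trailing": integer timesteps counted back
# from the end of the training schedule.
step_ratio = num_train_timesteps / num_inference_steps
trailing = np.arange(num_train_timesteps, 0, -step_ratio).round() - 1
print(trailing)  # [999. 749. 499. 249.]

# WebUI sgm_uniform: a uniform grid over the same range, with fractional
# timesteps that are converted to sigmas via log-sigma interpolation
# (t_to_sigma above).
sgm_t = np.linspace(num_train_timesteps - 1, 0, num_inference_steps + 1)[:-1]
print(sgm_t)  # [999.   749.25 499.5  249.75]
```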
@hlky thanks a lot for your findings, I haven't had the time to investigate the samplers + schedulers in the webuis, so I'm really grateful you did this. @rollingcookies does this resolve your issue then? You just need to change the `timestep_spacing` in the scheduler config. cc: @yiyixuxu for awareness.
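For anyone landing here, a minimal sketch of the suggested change on an SDXL pipeline (the prompt and step count are just placeholders for illustration):

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Reuse the existing scheduler config, switching only the timestep spacing.
# Per the comparison above, this reproduces WebUI's "SGM Uniform" schedule.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

# Placeholder prompt/steps.
image = pipe("an astronaut riding a horse", num_inference_steps=8).images[0]
image.save("sgm_uniform.png")
```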
@asomoza If there's interest in supporting the other schedulers from the webuis, I can look into existing equivalents and implement them as required. Also, for this case the documentation could be updated.
yes, if the schedulers are good and popular we can add them, but we need to evaluate them first. For example, there was an issue opened about one of them. But we're very open to suggestions and opinions about other schedulers/samplers. I agree about the documentation; ccing @stevhliu to know what would be the best way to add it.
I've done a preliminary popularity check with one of my datasets where the schedule type is available:

```json
{
  "Karras": 329096,
  "Automatic": 148458,
  "Simple": 56595,
  "Exponential": 51861,
  "SGM Uniform": 21927,
  "Beta": 8815,
  "Polyexponential": 6807,
  "Align Your Steps": 5119,
  "Uniform": 4342,
  "Normal": 2772,
  "DDIM": 1583,
  "KL Optimal": 1500,
  "Turbo": 465,
  "Align Your Steps 32": 429,
  "Align Your Steps GITS": 169,
  "DPM++ 3M SDE Karras": 88
}
```

Total:
I'll check other datasets, as the metadata in that one is limited, although I imagine the distribution will be roughly the same.
@asomoza Thank you so much for your efforts and help. I have forwarded this information to the folks at Invoke; hopefully this will be enough to add the SGMUniform scheduler to Invoke.
@rollingcookies no problem, but this was all thanks to @hlky. Wish you luck; I don't see why they wouldn't add it since it's a really small change.
@hlky about these:
Any of these could work if we don't have them right now; I've seen good generations with these.
Thanks for thinking about the docs! 🤗 Would you like to open a PR to add SGMUniform to the table?
Thanks to @rollingcookies, we can see in this issue that this scheduler works great with the Hyper and probably also the Lightning LoRAs/UNets.
It'd be fantastic if someone could contribute this scheduler to diffusers.
Please let me know if someone is willing to do this.