Some details about the implementations #150
In Stage 1, we trained the non-accelerated model (sdxl-base) with the standard diffusion loss. For Stages 2 and 3, although our first implementation computed all losses on the Lightning branch, we believe it is also fine to place the diffusion loss on the non-accelerated model, which should additionally help with compatibility.
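To make the loss placement concrete, here is a minimal, hypothetical sketch in plain Python: the diffusion (MSE) loss is computed from the non-accelerated base branch's noise prediction, while the ID loss compares the face embedding from the Lightning branch's generated image against a reference identity embedding. All function and argument names here are illustrative, not the repo's actual API.

```python
import math

def mse_loss(pred, target):
    """Diffusion training loss: mean squared error between the base
    (non-accelerated) model's noise prediction and the target noise."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def id_loss(face_emb, ref_emb):
    """ID loss: 1 - cosine similarity between the face embedding of the
    Lightning branch's output and the reference identity embedding."""
    dot = sum(a * b for a, b in zip(face_emb, ref_emb))
    norm = math.sqrt(sum(a * a for a in face_emb)) * \
        math.sqrt(sum(b * b for b in ref_emb))
    return 1.0 - dot / norm

def combined_loss(base_pred, noise_target, face_emb, ref_emb, id_weight=1.0):
    # Diffusion loss from the non-accelerated branch plus the weighted
    # ID loss from the accelerated (Lightning) branch.
    return mse_loss(base_pred, noise_target) + id_weight * id_loss(face_emb, ref_emb)
```

With this structure, each branch contributes its own term, so gradients from the diffusion loss flow through the base branch while identity supervision comes from the Lightning branch.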
Thanks for your quick responses!
I am training pulid like non-acclerated model for diffusion loss and acclerated model for id loss, and a weird problem arises: the generated image includes many faces of the same person. how can i resolve it? thx |
Hi! Thanks for your extraordinary work! I have a question about the training. The paper says, "We introduce a Lightning T2I branch alongside the regular diffusion branch." But in the method section, all the loss calculations are performed in the Lightning T2I branch. I would like to know: during training, is it enough to use only a Lightning model (SDXL-Lightning) instead of also using the original base model (SDXL)? I mean, the original base model would not need any trainable parameters. In that case, what does "alongside" refer to?