[SD3 Inference] T5 Token limit #8506
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
The failing test is not from this PR.
thanks!
Thanks for the prompt PR. I left a few questions. Overall this looks good to me; my only concern is that we're introducing a change that might lead to different results for the same prompt.
@asomoza, can we add a section to the docs (in a new PR)?
* max_sequence_length for the T5
* updated img2img
* apply suggestions

Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
What does this PR do?
Adds a `max_sequence_length` argument to set the token limit for the T5 text encoder.
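For context, here is a minimal usage sketch of the new argument (not part of the PR diff itself; the checkpoint id and the 256-token default are assumptions based on the public SD3 release):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Checkpoint id assumed from the public SD3 medium release.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe(
    prompt="a very long, detailed prompt that would be truncated at the default limit ...",
    max_sequence_length=512,  # raise the T5 token limit above the assumed default of 256
).images[0]
image.save("sd3_long_prompt.png")
```

The longer limit only affects the T5 encoder; the CLIP encoders keep their usual 77-token cap.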
I did a quick test enabling long prompts for the CLIP models, but it made no noticeable difference, so for now this PR only enables long prompts for the T5 to avoid adding more code and complexity to the pipeline.
Who can review?
Anyone in the community is free to review.