
Allow forcing full precision when computing attention #187

Closed · Warvito opened this issue Jan 14, 2023 · 1 comment · Fixed by #189
Warvito (Collaborator) commented Jan 14, 2023

As mentioned in the Stable Diffusion v2.1 README (https://github.com/Stability-AI/stablediffusion/blame/main/README.md#L15), computing attention in half precision can cause numerical instabilities. To avoid this, we could add an option to our models that forces the attention computation to run in full precision.
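A minimal sketch of what such an option could look like, assuming a PyTorch-style attention block; the `upcast_attention` flag and the `attention` helper here are hypothetical names for illustration, not necessarily the API that the linked PR adds:

```python
import torch


def attention(q, k, v, upcast_attention: bool = False):
    """Scaled dot-product attention with an optional full-precision path.

    When ``upcast_attention`` is True, the softmax(QK^T)V product is
    computed in float32 even if the inputs are half precision, to avoid
    the instabilities reported for Stable Diffusion v2.1.
    """
    dtype = q.dtype
    if upcast_attention:
        # Upcast only inside the attention computation.
        q, k, v = q.float(), k.float(), v.float()
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
    out = attn @ v
    # Cast back so the rest of the half-precision pipeline is unchanged.
    return out.to(dtype)
```

Casting back to the caller's dtype at the end keeps the surrounding model in half precision; only the numerically sensitive softmax step runs in float32.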

Warvito linked a pull request on Jan 14, 2023 that will close this issue
Warvito self-assigned this on Jan 16, 2023