
[Gemma] Fix eager attention #29187

Merged: 6 commits into huggingface:main on Feb 22, 2024

Conversation

sanchit-gandhi (Contributor) commented Feb 21, 2024

What does this PR do?

Fixes the Gemma "eager" attention implementation, which is the default for torch versions <= 2.1. The issue was reported in the Hub discussions and by @osanseviero from the model cards/blog post.

The PR also includes a set of slow tests to ensure:

  1. Logits equivalence between {eager, sdpa, flash attention 2}
  2. Expected generation results for {eager, sdpa, flash attention 2}

=> Together, these tests confirm that all attention implementations work and produce equivalent results across back-ends. A minimal sketch of such a check is shown below.
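For illustration, here is a minimal sketch of what a logits-equivalence check across attention back-ends can look like. The checkpoint, prompt, and tolerance are illustrative assumptions, not the exact test code from this PR:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # illustrative checkpoint choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("The quick brown fox", return_tensors="pt")

logits = {}
for impl in ("eager", "sdpa"):  # "flash_attention_2" additionally requires a CUDA device
    model = AutoModelForCausalLM.from_pretrained(
        model_id, attn_implementation=impl, torch_dtype=torch.float32
    )
    with torch.no_grad():
        logits[impl] = model(**inputs).logits

# The back-ends should agree up to numerical noise.
assert torch.allclose(logits["eager"], logits["sdpa"], atol=1e-3)
```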

```diff
@@ -276,7 +276,7 @@ def forward(
         attn_output = attn_output.transpose(1, 2).contiguous()

-        attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
+        attn_output = attn_output.view(bsz, q_len, -1)
```
sanchit-gandhi (Contributor Author):

This is the only modelling code change required - the remainder of the changes in this PR are logit + integration tests
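For context on why this one-liner matters: in Gemma, num_heads * head_dim does not equal hidden_size (the Gemma-7B config gives 16 * 256 = 4096 versus a hidden_size of 3072), so reshaping the attention output to hidden_size requests the wrong number of elements. A standalone sketch of the failure mode, with dimensions taken from the Gemma-7B config and variable names mirroring the modelling code:

```python
import torch

# Gemma-7B config: hidden_size (3072) != num_heads * head_dim (16 * 256 = 4096)
bsz, q_len = 1, 8
hidden_size, num_heads, head_dim = 3072, 16, 256

# Attention output after .transpose(1, 2).contiguous(): (bsz, q_len, num_heads, head_dim)
attn_output = torch.randn(bsz, q_len, num_heads, head_dim)

try:
    attn_output.reshape(bsz, q_len, hidden_size)  # old code path
except RuntimeError as e:
    print(e)  # shape '[1, 8, 3072]' is invalid for input of size 32768

attn_output = attn_output.view(bsz, q_len, -1)  # fixed code path
print(attn_output.shape)  # torch.Size([1, 8, 4096]); o_proj then maps 4096 -> 3072
```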

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

ArthurZucker (Collaborator)

Thanks for the catch. A real pity our 280 prior tests did not catch this! 🤗

ArthurZucker merged commit 2a9b1f8 into huggingface:main on Feb 22, 2024
18 checks passed
ArthurZucker pushed a commit that referenced this pull request Feb 22, 2024
* fix modelling code

* add tests

* fix tests

* add some logit tests

* style

* fix fix
sanchit-gandhi (Contributor Author) commented Feb 22, 2024

I'll look at integrating 2 tests in test_modeling_common.py that can automatically test for eager vs sdpa and eager vs fa2. Currently, this is done on an ad-hoc basis for each model (e.g. Whisper here).

ArthurZucker (Collaborator)

It's also that it tests dummy models, but here head_dim != hidden_size / num_heads! So that case was not studied.
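Concretely, the shared dummy test configs keep hidden_size == num_heads * head_dim, so the old reshape was coincidentally valid there. A quick check using the published Gemma sizes (assuming the 2B config's 8 heads of dim 256 over a hidden size of 2048):

```python
# Dummy-style configs (and, per its published config, Gemma-2B): equality holds,
# so reshape(..., hidden_size) happened to work.
assert 8 * 256 == 2048
# Gemma-7B: num_heads * head_dim != hidden_size, so the old reshape raised.
assert 16 * 256 != 3072
```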

fxmarty (Contributor) commented Feb 22, 2024

@sanchit-gandhi Not true, see `def test_eager_matches_sdpa_inference(self, torch_dtype: str)`. It is just a slow test. For FA2 vs eager it's possible there is no global test though, I did not check that.

ArthurZucker (Collaborator)

eager vs sdpa passed 😉
