Added support for Tapas Model #520
Conversation
Do you mind if I ask whether I am on the right track?
…hub.com/JuheonChu/optimum into Add-Bettertransformer-support-for-Tapas
It looks to me like you did everything well!
Left a few nits and comments.
@younesbelkada will be back on Wednesday, so we should wait for his approval, but great work!
@@ -23,6 +23,8 @@
     ViTLayerBetterTransformer,
     Wav2Vec2EncoderLayerBetterTransformer,
     WhisperEncoderLayerBetterTransformer,
+    TapasLayerBetterTransformer,
The documentation is not available anymore as the PR was closed or merged.
test_better_encoder Co-authored-by: Michael Benayoun <[email protected]>
Co-authored-by: Michael Benayoun <[email protected]>
Thanks for your suggestion and feedback! I reformatted with black; may I run the CircleCI tests one more time to check whether my reformatted file passes?
Thanks for your contribution! @younesbelkada good to merge if you are satisfied.
Co-authored-by: Sylvain Gugger <[email protected]>
Waiting for the tests to run but it looks good to me!
Apparently a test is not passing: https://github.com/huggingface/optimum/actions/runs/3566116329/jobs/5992236636. You can run them with …. Can you run … as well?
…hub.com/JuheonChu/optimum into Add-Bettertransformer-support-for-Tapas
@fxmarty I applied ….
It seems to me that the module imports in tests/bettertransformer/test_bettertransformer_encoder.py are all failing. However, I would not like to change the file structure based only on the suggestion from here. May I ask for any suggestions? Thank you for your time.
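As a quick sanity check (not from the thread — the helper below is a hypothetical convenience, built only on the standard library), one way to verify that the editable install is actually visible to the environment running the tests is to probe importability directly:

```python
import importlib.util

def can_import(name: str) -> bool:
    """Return True if `name` resolves to an importable module."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A missing parent package also means the module cannot be imported.
        return False

# If this prints False, `pip install -e .` likely did not run in the
# environment pytest uses, and the test imports would fail for the same reason.
print(can_import("optimum.bettertransformer"))
```

If the import resolves here but still fails under pytest, the problem is more likely the working directory or test configuration than the file structure.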
Co-authored-by: fxmarty <[email protected]>
Call super().__init__() to inherit from BetterTransformerBaseLayer Co-authored-by: fxmarty <[email protected]>
Thanks so much for your efforts @JuheonChu !
I left a few comments. I think the conversion is currently failing because the keys you are trying to retrieve (e.g. tapas_layer.attention.query.weight) do not match; you should use tapas_layer.attention.self.query.weight instead. (Check the detailed modeling file here.)
One way to make sure that your conversion worked is to repeatedly run this snippet and debug further 💪 :
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer
model_id = "hf-internal-testing/tiny-random-TapasModel"
model = AutoModel.from_pretrained(model_id)
bt_model = BetterTransformer.transform(model)
Thanks, and let us know if you run into any issues!
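The key-path mismatch described above can be reproduced without transformers at all. In the sketch below, the classes are hypothetical stand-ins for the nesting in modeling_tapas.py (the real modules are nn.Module subclasses); it shows why attention.query.weight fails to resolve while attention.self.query.weight works — the projections live one level deeper, under the `self` submodule:

```python
from functools import reduce

def get_by_path(obj, dotted_path):
    """Resolve a nested attribute path like 'attention.self.query.weight'."""
    return reduce(getattr, dotted_path.split("."), obj)

# Hypothetical stand-ins mirroring the Tapas module nesting.
class Query:
    weight = "query-weight"

class SelfAttention:          # like TapasSelfAttention: holds query/key/value
    query = Query()

class Attention:              # like TapasAttention: projections under `.self`
    self = SelfAttention()

class Layer:                  # like TapasLayer
    attention = Attention()

layer = Layer()
print(get_by_path(layer, "attention.self.query.weight"))  # prints "query-weight"
# get_by_path(layer, "attention.query.weight") would raise AttributeError
```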
@@ -49,6 +49,7 @@
 ALL_ENCODER_DECODER_MODELS_TO_TEST = [
     "hf-internal-testing/tiny-random-FSMTModel",
     "hf-internal-testing/tiny-random-BartModel",
+    "hf-internal-testing/tiny-random-TapasModel",
Could you move this to ALL_ENCODER_MODELS_TO_TEST instead? Tapas is an encoder-only model ;)
Reading https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/tapas#transformers.TapasModel:
"The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention Is All You Need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin."
Oh, I see! Thanks for the insightful explanation! I just moved it! :)
self.in_proj_weight = nn.Parameter(
    torch.cat(
        [
            tapas_layer.attention.query.weight,
Looking at the Tapas implementation, https://github.com/huggingface/transformers/blob/28247e78819ab9756b81f8df39611c333d099400/src/transformers/models/tapas/modeling_tapas.py#L442 , I think we need tapas_layer.attention.self.query.weight here and in the rest; that is why the test is failing!
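For intuition, the in_proj_weight being built here concatenates the query, key, and value projection weights along the first dimension. A torch-free sketch of the same idea, with plain nested lists standing in for tensors and hypothetical toy values:

```python
# Toy 2x2 "weight matrices" for the three projections (hypothetical values).
q_w = [[1, 0], [0, 1]]
k_w = [[2, 0], [0, 2]]
v_w = [[3, 0], [0, 3]]

# torch.cat([q, k, v], dim=0) stacks rows; list concatenation mirrors that.
in_proj_weight = q_w + k_w + v_w

print(len(in_proj_weight))  # prints 6: 3 * hidden_size rows for hidden_size == 2
```

If the attribute paths fed into the concatenation are wrong, the failure surfaces before this step ever runs, which matches the AttributeError-style test failure in the thread.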
Oh... thank you for providing me with your insights!
Actually, looking at https://github.com/huggingface/transformers/blob/28247e78819ab9756b81f8df39611c333d099400/src/transformers/models/tapas/modeling_tapas.py#L539 I think the …. What do you think @younesbelkada ?
Co-authored-by: Younes Belkada <[email protected]>
Yes, probably; thanks for the heads up @fxmarty !
Just made the changes accordingly! @younesbelkada thank you so much! I will try the conversions and let you know!
@@ -19,6 +19,7 @@
     BertLayerBetterTransformer,
     DistilBertLayerBetterTransformer,
     FSMTEncoderLayerBetterTransformer,
+    TapasLayerBetterTransformer,
To be removed
Seems all good now @JuheonChu ! Thank you for this PR! So in the end, using ….
Thank you so much, everyone!! @fxmarty @younesbelkada @sgugger @michaelbenayoun.
@@ -17,7 +17,6 @@
 from .base import BetterTransformerBaseLayer

-
 class TapasLayerBetterTransformer(BetterTransformerBaseLayer):
     def __init__(self, tapas_layer, config):
I meant to remove the class TapasLayerBetterTransformer in full 😅 It is not needed in the end, since BertLayerBetterTransformer can handle Tapas.
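The fix described here amounts to pointing Tapas at the existing BERT converter rather than keeping a duplicate class. A hypothetical registry (illustrative names only, not optimum's actual internals) makes the idea concrete:

```python
# Hypothetical mapping from an encoder-layer class name to the converter
# that rewrites it. Tapas reuses the BERT entry because its layer layout
# matches BertLayer, so no TapasLayerBetterTransformer class is needed.
LAYER_CONVERTERS = {
    "BertLayer": "BertLayerBetterTransformer",
    "TapasLayer": "BertLayerBetterTransformer",
}

def converter_for(layer_class_name: str) -> str:
    """Look up the converter for a model's encoder-layer class."""
    try:
        return LAYER_CONVERTERS[layer_class_name]
    except KeyError:
        raise NotImplementedError(
            f"No BetterTransformer converter for {layer_class_name}"
        )

print(converter_for("TapasLayer"))  # prints "BertLayerBetterTransformer"
```

Registering an alias in a mapping like this keeps one conversion implementation per layer layout instead of one per model name.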
I think I removed class TapasLayerBetterTransformer. Is there anything else to be adjusted in this file?
On your branch, with optimum dev installed (pip install -e . in the optimum top folder), can you try to run the script @younesbelkada provided and see what happens?
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer
model_id = "hf-internal-testing/tiny-random-TapasModel"
model = AutoModel.from_pretrained(model_id)
bt_model = BetterTransformer.transform(model)
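Beyond transform() not raising, a common follow-up check (not shown in the thread) is that the converted model's outputs match the original within tolerance. Below is a dependency-free sketch of that comparison, with torch.allclose-style semantics over flat lists of floats:

```python
def allclose(xs, ys, atol=1e-5, rtol=1e-5):
    """Elementwise |x - y| <= atol + rtol * |y|, like torch.allclose on flat lists."""
    return len(xs) == len(ys) and all(
        abs(x - y) <= atol + rtol * abs(y) for x, y in zip(xs, ys)
    )

# In practice one would flatten model(**inputs).last_hidden_state and
# bt_model(**inputs).last_hidden_state and compare them with a check like this.
print(allclose([1.0, 2.0], [1.0, 2.0 + 1e-7]))  # prints True
```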
Hey @JuheonChu !
Thanks a lot @JuheonChu for adding Tapas support for BetterTransformer! Great job!
Looking forward to your next contributions 💪
LGTM!
@younesbelkada May I ask where I can check the merged BetterTransformer feature for the Tapas model?
What does this PR do?
This PR adds the BetterTransformer API for the Tapas model.
Fixes #20372
Before submitting
Questions
To: @younesbelkada, @sgugger