Take hidden states from last non-padding token when batching #38
Conversation
weird! i thought the tokenizers were left-padding by default... ah, mistral does...

>>> llama3_tokenizer(["x", "x x"], padding=True)
{'input_ids': [[128000, 87, 128001], [128000, 87, 865]], 'attention_mask': [[1, 1, 0], [1, 1, 1]]}
>>> mistral_tokenizer(["x", "x x"], padding=True)
{'input_ids': [[2, 1, 1318], [1, 1318, 1318]], 'attention_mask': [[0, 1, 1], [1, 1, 1]]}
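(For reference, the padding side can also be checked directly on the tokenizer rather than inspecting a batch; a quick sketch, with illustrative checkpoint names:)

```python
from transformers import AutoTokenizer

# Checkpoint names are illustrative; substitute the tokenizers you're comparing.
llama3_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
mistral_tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# padding_side is a plain string attribute ("left" or "right") on HF tokenizers.
print("llama3:", llama3_tokenizer.padding_side)
print("mistral:", mistral_tokenizer.padding_side)
```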
Oh huh… maybe an easier fix would be to force the tokenizer to always left-pad.
yeah i was thinking that, but i think your approach is better because the user might want right-padding for whatever reason -- better to not mess with their tokenizer instance if we can avoid it.
I think it is checked already…
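(For completeness, the "force left padding" alternative mentioned above would look roughly like this -- a sketch only, not what this PR does, and the checkpoint name is illustrative:)

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")  # illustrative
if tokenizer.pad_token is None:
    # If the tokenizer defines no pad token, reusing EOS is a common workaround.
    tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"  # mutates the caller's tokenizer instance

batch = tokenizer(["x", "x x"], padding=True, return_tensors="pt")
# With left padding, position -1 of every row is a real token, so taking
# hidden_states[:, -1] would be safe -- but as noted above, it's nicer not to
# touch the user's tokenizer, hence picking the last non-padding token instead.
```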
Thanks so much for catching this! Will have to retry all my llama-3 generations now... :-)
Glad I checked the PRs too, was just about to cut the 0.3 release so you just squeaked in!
First of all, this is a really neat repo!
I noticed that batched_get_hiddens always takes hidden states from the last token in each sequence in a batch. Since the sequences are padded to the same length, this means that batching affects the hidden states of every sequence except the longest one in each batch. After this change, there's still some difference between the batched and non-batched hidden states, but I think that might be due to the model itself, since batching changes the order of operations: huggingface/transformers#23017 (comment)
I've only tried this on llama-3-8b; I'm not sure whether it will need changes to work on other models.
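For anyone reading along, the idea behind the change looks roughly like this (a minimal sketch assuming right-padded batches like the llama-3 example above; the function name and indexing are illustrative, not the repo's actual implementation):

```python
import torch

def last_nonpad_hidden(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Pick the hidden state of the last non-padding token in each sequence.

    hidden_states: (batch, seq_len, hidden_dim) from one layer of the model
    attention_mask: (batch, seq_len), 1 for real tokens and 0 for padding
    """
    # For right-padded batches, (number of real tokens - 1) is the index of the
    # last real token in each row.
    last_idx = attention_mask.sum(dim=1) - 1
    batch_idx = torch.arange(hidden_states.size(0), device=hidden_states.device)
    return hidden_states[batch_idx, last_idx]  # (batch, hidden_dim)
```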