Assign <unusedXX> tokens with special_tokens without growing vocab size #1473
Comments
That is something we should do indeed
Beautiful. That would mostly resolve another issue as well huggingface/trl#1412 (comment)
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
I'll take this one on in a bit!
opened a PR!
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
I'm trying to modify the google/gemma-7b tokenizer for instruction-tuning purposes. My goal is to replace some of the "unused" tokens that were deliberately added to the tokenizer with my own "custom" tokens, and to have those custom tokens treated as "special" (i.e. not normalized, stripped, etc.). However, this seems like an impossible task. What I would like to do is some version of the following (see the sketch below):
Given that many open-source models/tokenizers specifically reserve a set of unused tokens for this purpose, I would like to make use of them without growing the vocabulary, and therefore without having to resize the model's embedding matrix.
I've tried manually manipulating the vocab and assigning the appropriate dicts on the forward and reverse pass (encoder, decoder), but nothing seems to work.
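For reference, this is roughly the kind of manipulation I attempted (a sketch; the custom token name is just an example):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/gemma-7b")

vocab = tok.get_vocab()                      # token -> id ("encoder" direction)
unused_id = vocab["<unused0>"]

# Re-point the custom string at the unused id on the forward pass...
vocab["<|my_custom_token|>"] = unused_id
del vocab["<unused0>"]
# ...and rebuild the reverse mapping for the decode direction.
id_to_token = {i: t for t, i in vocab.items()}

# get_vocab() returns a plain Python dict, so editing it never reaches the
# fast tokenizer's internal (Rust-side) vocab: encoding/decoding still uses
# "<unused0>", and the custom token is neither recognized nor treated as special.
print(tok.convert_ids_to_tokens(unused_id))   # still "<unused0>"
print(tok.tokenize("<|my_custom_token|>"))    # split into ordinary subword pieces
```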
How can I achieve all three goals: making use of the unused tokens, ensuring they are treated as "special", and not growing the vocabulary of the tokenizer or the model's embedding?