[Doc] `add_special_tokens`'s documentation is ambiguous #22935
Comments
Note that not all tokenizers support adding special tokens. If a tokenizer does not support adding special tokens, setting `add_special_tokens=True` has no effect. You are using the "EleutherAI/pythia-70m" tokenizer, which does not have a specific token for the beginning or end of a sequence. If you want to add them yourself:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

# Encode without special tokens, then wrap the ids manually.
input_ids = tok.encode("the dog walked", add_special_tokens=False)
input_ids = [tok.bos_token_id] + input_ids + [tok.eos_token_id]
attention_mask = [1] * len(input_ids)
output = {"input_ids": input_ids, "attention_mask": attention_mask}
print(output)
```
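With pythia-70m, `<|endoftext|>` (id `0`) is configured as both the BOS and EOS token, so this should print `{'input_ids': [0, 783, 4370, 7428, 0], 'attention_mask': [1, 1, 1, 1, 1]}`, which matches the output the reporter expected (the inner token ids are inferred from the expected output quoted further down).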
Thanks for explaining. Can this behavior be added to the docs for the transformers tokenizer class? Nowhere in the API docs does it say that `add_special_tokens=True` can silently have no effect.
You can also define these tokens when initialising the model or after.
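A minimal sketch of both options (not from the thread itself; it reuses `<|endoftext|>`, which is id `0` for this tokenizer, as the BOS/EOS string, an assumption based on the expected output below):

```python
from transformers import AutoTokenizer

# Option 1: define the special tokens when loading the tokenizer.
tok = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m",
    bos_token="<|endoftext|>",  # assumed token string; any existing or new string works
    eos_token="<|endoftext|>",
)

# Option 2: register them on an already-initialised tokenizer.
tok.add_special_tokens({"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>"})

print(tok.bos_token_id, tok.eos_token_id)
```

Note that defining the tokens only makes `tok.bos_token_id` and `tok.eos_token_id` available; whether `encode` inserts them automatically still depends on the tokenizer, as explained above.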
I am waiting until the added tokens refactoring is finished to make sure this is fixed, and will update the doc!
System Info
`transformers` version: 4.28.1

Who can help?
@ArthurZucker
Information
Tasks
An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)

Reproduction
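The original snippet is not preserved here; a minimal reproduction consistent with the rest of the thread (tokenizer name from the system info, input string and token ids from the discussion above) would be:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")

# add_special_tokens=True is the default; shown explicitly here.
print(tok.encode("the dog walked", add_special_tokens=True))
```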
outputs `[783, 4370, 7428]`, i.e. no BOS/EOS ids are added.
Expected behavior
I expect it to output `[0, 783, 4370, 7428, 0]`. Or am I misunderstanding what `add_special_tokens` is supposed to do?