diff --git a/docs/source/en/chat_templating.md b/docs/source/en/chat_templating.md
index d840caaf6605..c4069dd1afc7 100644
--- a/docs/source/en/chat_templating.md
+++ b/docs/source/en/chat_templating.md
@@ -580,7 +580,7 @@ default template for that model class is used instead. Let's take a look at the
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
->>> tokenizer.default_chat_template
+>>> tokenizer.chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```
@@ -704,23 +704,6 @@ with other names, pass the name of the template you want to the `chat_template`
We find that this can be a bit confusing for users, though - so if you're writing a template yourself, we recommend
trying to put it all in a single template where possible!
-### What are "default" templates?
-
-Before the introduction of chat templates, chat handling was hardcoded at the model class level. For backwards
-compatibility, we have retained this class-specific handling as default templates, also set at the class level. If a
-model does not have a chat template set, but there is a default template for its model class, the `TextGenerationPipeline`
-class and methods like `apply_chat_template` will use the class template instead. You can find out what the default
-template for your tokenizer is by checking the `tokenizer.default_chat_template` attribute.
-
-This is something we do purely for backward compatibility reasons, to avoid breaking any existing workflows. Even when
-the class template is appropriate for your model, we strongly recommend overriding the default template by
-setting the `chat_template` attribute explicitly to make it clear to users that your model has been correctly configured
-for chat.
-
-Now that actual chat templates have been adopted more widely, default templates have been deprecated and will be
-removed in a future release. We strongly recommend setting the `chat_template` attribute for any tokenizers that
-still depend on them!
-
### What template should I use?
When setting the template for a model that's already been trained for chat, you should ensure that the template
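The advice in the docs above (set `chat_template` explicitly rather than relying on a class default) amounts to one attribute assignment plus a save. A minimal sketch, using the Blenderbot template shown earlier; the output directory name is illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")

# Pin the template explicitly instead of relying on a class-level default.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
    "{{ message['content'] }}"
    "{% if not loop.last %}{{ ' ' }}{% endif %}"
    "{% endfor %}"
    "{{ eos_token }}"
)

# save_pretrained() writes the template into tokenizer_config.json.
tokenizer.save_pretrained("./blenderbot-400M-distill-with-template")
```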
diff --git a/docs/source/es/chat_templating.md b/docs/source/es/chat_templating.md
index 10129e87ef11..e287c2137435 100644
--- a/docs/source/es/chat_templating.md
+++ b/docs/source/es/chat_templating.md
@@ -220,7 +220,7 @@ La plantilla de chat para un modelo se almacena en el atributo `tokenizer.chat_t
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
->>> tokenizer.default_chat_template
+>>> tokenizer.chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```
@@ -307,12 +307,6 @@ Si estás ajustando finamente un modelo para chat, además de establecer una pla
-### ¿Qué son las plantillas "default"?
-
-Antes de la introducción de las plantillas de chat, el manejo del chat estaba codificado en el nivel de la clase del modelo. Por razones de compatibilidad con versiones anteriores, hemos conservado este manejo específico de la clase como plantillas predeterminadas, también establecidas a nivel de clase. Si un modelo no tiene una plantilla de chat establecida, pero hay una plantilla predeterminada para su clase de modelo, la clase `TextGenerationPipeline` y métodos como `apply_chat_template` usarán la plantilla de clase en su lugar. Puedes averiguar cuál es la plantilla predeterminada para tu tokenizador comprobando el atributo `tokenizer.default_chat_template`.
-
-Esto es algo que hacemos puramente por razones de compatibilidad con versiones anteriores, para evitar romper cualquier flujo de trabajo existente. Incluso cuando la plantilla de clase es apropiada para tu modelo, recomendamos encarecidamente anular la plantilla predeterminada estableciendo explícitamente el atributo `chat_template` para dejar claro a los usuarios que tu modelo ha sido configurado correctamente para el chat, y para estar preparados para el futuro en caso de que las plantillas predeterminadas alguna vez se alteren o se eliminen.
-
### ¿Qué plantilla debería usar?
Cuando establezcas la plantilla para un modelo que ya ha sido entrenado para chat, debes asegurarte de que la plantilla coincida exactamente con el formato de mensajes que el modelo vio durante el entrenamiento, o de lo contrario es probable que experimentes degradación del rendimiento. Esto es cierto incluso si estás entrenando aún más el modelo; probablemente obtendrás el mejor rendimiento si mantienes constantes los tokens de chat. Esto es muy análogo a la tokenización: generalmente obtienes el mejor rendimiento para la inferencia o el ajuste fino cuando coincides precisamente con la tokenización utilizada durante el entrenamiento.
diff --git a/docs/source/ja/chat_templating.md b/docs/source/ja/chat_templating.md
index 200bf40ac4cf..82db942ef1e1 100644
--- a/docs/source/ja/chat_templating.md
+++ b/docs/source/ja/chat_templating.md
@@ -85,7 +85,7 @@ LLM(Language Model)のますます一般的な使用事例の1つは「チ
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
->>> tokenizer.default_chat_template
+>>> tokenizer.chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```
diff --git a/docs/source/zh/chat_templating.md b/docs/source/zh/chat_templating.md
index a08da47cb27a..e0ab50b634c7 100644
--- a/docs/source/zh/chat_templating.md
+++ b/docs/source/zh/chat_templating.md
@@ -228,7 +228,7 @@ The sun.
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
->>> tokenizer.default_chat_template
+>>> tokenizer.chat_template
"{% for message in messages %}{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}{{ message['content'] }}{% if not loop.last %}{{ ' ' }}{% endif %}{% endfor %}{{ eos_token }}"
```
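With class-level fallbacks removed, code that used to inherit a default template should check for `None` and supply its own. A hedged sketch; the fallback template below is a stand-in chosen for illustration, not something the library provides:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

if tokenizer.chat_template is None:
    # This checkpoint ships no chat template, so set one explicitly
    # rather than counting on a (removed) class default.
    tokenizer.chat_template = (
        "{% for message in messages %}{{ message['content'] }}{{ eos_token }}{% endfor %}"
    )
```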
diff --git a/src/transformers/models/blenderbot/tokenization_blenderbot.py b/src/transformers/models/blenderbot/tokenization_blenderbot.py
index 677245382334..1a8807214d52 100644
--- a/src/transformers/models/blenderbot/tokenization_blenderbot.py
+++ b/src/transformers/models/blenderbot/tokenization_blenderbot.py
@@ -405,17 +405,3 @@ def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1:
`List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
"""
return token_ids_0 + [self.eos_token_id]
-
- @property
- def default_chat_template(self):
- """
- A very simple chat template that just adds whitespace between messages.
- """
- return (
- "{% for message in messages %}"
- "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
- "{{ message['content'] }}"
- "{% if not loop.last %}{{ ' ' }}{% endif %}"
- "{% endfor %}"
- "{{ eos_token }}"
- )
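For intuition, the Blenderbot template deleted above simply space-joins turns and appends the EOS token. Assuming the checkpoint's `tokenizer_config.json` still carries this template, a render should look like:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False))
# Expected: " Hello! Hi there.</s>"
```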
diff --git a/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py b/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py
index 01cbf13809d6..0d24ed62c574 100644
--- a/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py
+++ b/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py
@@ -287,18 +287,3 @@ def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1:
`List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
"""
return token_ids_0 + [self.eos_token_id]
-
- @property
- # Copied from transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.default_chat_template
- def default_chat_template(self):
- """
- A very simple chat template that just adds whitespace between messages.
- """
- return (
- "{% for message in messages %}"
- "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
- "{{ message['content'] }}"
- "{% if not loop.last %}{{ ' ' }}{% endif %}"
- "{% endfor %}"
- "{{ eos_token }}"
- )
diff --git a/src/transformers/models/blenderbot_small/tokenization_blenderbot_small.py b/src/transformers/models/blenderbot_small/tokenization_blenderbot_small.py
index 832b5315edfd..08c7be332e31 100644
--- a/src/transformers/models/blenderbot_small/tokenization_blenderbot_small.py
+++ b/src/transformers/models/blenderbot_small/tokenization_blenderbot_small.py
@@ -217,18 +217,3 @@ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] =
index += 1
return vocab_file, merge_file
-
- @property
- # Copied from transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.default_chat_template
- def default_chat_template(self):
- """
- A very simple chat template that just adds whitespace between messages.
- """
- return (
- "{% for message in messages %}"
- "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
- "{{ message['content'] }}"
- "{% if not loop.last %}{{ ' ' }}{% endif %}"
- "{% endfor %}"
- "{{ eos_token }}"
- )
diff --git a/src/transformers/models/blenderbot_small/tokenization_blenderbot_small_fast.py b/src/transformers/models/blenderbot_small/tokenization_blenderbot_small_fast.py
index a80acdb650e4..21fb76cbfc86 100644
--- a/src/transformers/models/blenderbot_small/tokenization_blenderbot_small_fast.py
+++ b/src/transformers/models/blenderbot_small/tokenization_blenderbot_small_fast.py
@@ -98,18 +98,3 @@ def create_token_type_ids_from_sequences(
if token_ids_1 is None:
return len(cls + token_ids_0 + sep) * [0]
return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
-
- @property
- # Copied from transformers.models.blenderbot.tokenization_blenderbot.BlenderbotTokenizer.default_chat_template
- def default_chat_template(self):
- """
- A very simple chat template that just adds whitespace between messages.
- """
- return (
- "{% for message in messages %}"
- "{% if message['role'] == 'user' %}{{ ' ' }}{% endif %}"
- "{{ message['content'] }}"
- "{% if not loop.last %}{{ ' ' }}{% endif %}"
- "{% endfor %}"
- "{{ eos_token }}"
- )
diff --git a/src/transformers/models/bloom/tokenization_bloom_fast.py b/src/transformers/models/bloom/tokenization_bloom_fast.py
index d0da1621d4c9..54e637735308 100644
--- a/src/transformers/models/bloom/tokenization_bloom_fast.py
+++ b/src/transformers/models/bloom/tokenization_bloom_fast.py
@@ -147,11 +147,3 @@ def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
files = self._tokenizer.model.save(save_directory, name=filename_prefix)
return tuple(files)
-
- @property
- # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.default_chat_template
- def default_chat_template(self):
- """
- A simple chat template that ignores role information and just concatenates messages with EOS tokens.
- """
- return "{% for message in messages %}" "{{ message.content }}{{ eos_token }}" "{% endfor %}"
diff --git a/src/transformers/models/code_llama/tokenization_code_llama.py b/src/transformers/models/code_llama/tokenization_code_llama.py
index 5bbf2d0452f4..cc906687874c 100644
--- a/src/transformers/models/code_llama/tokenization_code_llama.py
+++ b/src/transformers/models/code_llama/tokenization_code_llama.py
@@ -437,61 +437,6 @@ def create_token_type_ids_from_sequences(
return output
- @property
- # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.default_chat_template
- def default_chat_template(self):
- """
- LLaMA uses [INST] and [/INST] to indicate user messages, and <<SYS>> and <</SYS>> to indicate system messages.
- Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict
- user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering
- rather than needing special tokens. The system message is partly 'embedded' in the first user message, which
- results in an unusual token ordering when it is present. This template should definitely be changed if you wish
- to fine-tune a model with more flexible role ordering!
-
- The output should look something like:
-
- <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos><bos>[INST] Prompt [/INST] Answer <eos>
- <bos>[INST] Prompt [/INST]
-
- The reference for this chat template is [this code
- snippet](https://github.com/facebookresearch/llama/blob/556949fdfb72da27c2f4a40b7f0e4cf0b8153a28/llama/generation.py#L320-L362)
- in the original repository.
- """
- template = (
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% elif USE_DEFAULT_PROMPT == true and not '<>' in messages[0]['content'] %}"
- "{% set loop_messages = messages %}" # Or use the default system message if the flag is set
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = false %}"
- "{% endif %}"
- "{% for message in loop_messages %}" # Loop over all non-system messages
- "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
- "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
- "{% endif %}"
- "{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message
- "{% set content = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}"
- "{% else %}"
- "{% set content = message['content'] %}"
- "{% endif %}"
- "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
- "{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
- "{% elif message['role'] == 'system' %}"
- "{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ ' ' + content.strip() + ' ' + eos_token }}"
- "{% endif %}"
- "{% endfor %}"
- )
- template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false")
- default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
- template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
-
- return template
-
def __getstate__(self):
state = self.__dict__.copy()
state["sp_model"] = None
diff --git a/src/transformers/models/code_llama/tokenization_code_llama_fast.py b/src/transformers/models/code_llama/tokenization_code_llama_fast.py
index 9bdb7a65b584..b832348d07af 100644
--- a/src/transformers/models/code_llama/tokenization_code_llama_fast.py
+++ b/src/transformers/models/code_llama/tokenization_code_llama_fast.py
@@ -349,61 +349,6 @@ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] =
return (out_vocab_file,)
- @property
- # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.default_chat_template
- def default_chat_template(self):
- """
- LLaMA uses [INST] and [/INST] to indicate user messages, and <<SYS>> and <</SYS>> to indicate system messages.
- Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict
- user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering
- rather than needing special tokens. The system message is partly 'embedded' in the first user message, which
- results in an unusual token ordering when it is present. This template should definitely be changed if you wish
- to fine-tune a model with more flexible role ordering!
-
- The output should look something like:
-
- <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos><bos>[INST] Prompt [/INST] Answer <eos>
- <bos>[INST] Prompt [/INST]
-
- The reference for this chat template is [this code
- snippet](https://github.com/facebookresearch/llama/blob/556949fdfb72da27c2f4a40b7f0e4cf0b8153a28/llama/generation.py#L320-L362)
- in the original repository.
- """
- template = (
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% elif USE_DEFAULT_PROMPT == true and not '<>' in messages[0]['content'] %}"
- "{% set loop_messages = messages %}" # Or use the default system message if the flag is set
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = false %}"
- "{% endif %}"
- "{% for message in loop_messages %}" # Loop over all non-system messages
- "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
- "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
- "{% endif %}"
- "{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message
- "{% set content = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}"
- "{% else %}"
- "{% set content = message['content'] %}"
- "{% endif %}"
- "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
- "{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
- "{% elif message['role'] == 'system' %}"
- "{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ ' ' + content.strip() + ' ' + eos_token }}"
- "{% endif %}"
- "{% endfor %}"
- )
- template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false")
- default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
- template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
-
- return template
-
def build_inputs_with_special_tokens(
self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
) -> List[int]:
diff --git a/src/transformers/models/cohere/tokenization_cohere_fast.py b/src/transformers/models/cohere/tokenization_cohere_fast.py
index b0a62e279ca8..bac665b473c5 100644
--- a/src/transformers/models/cohere/tokenization_cohere_fast.py
+++ b/src/transformers/models/cohere/tokenization_cohere_fast.py
@@ -228,188 +228,6 @@ def add_bos_token(self, value):
self._add_bos_token = value
self.update_post_processor()
- @property
- def default_chat_template(self):
- """
- Cohere Tokenizer uses <|START_OF_TURN_TOKEN|> and <|END_OF_TURN_TOKEN|> to indicate each turn in a chat.
- Additionally, to indicate the source of the message, <|USER_TOKEN|>, <|CHATBOT_TOKEN|> and <|SYSTEM_TOKEN|>
- for user, assistant and system messages respectively.
-
- The output should look something like:
- <|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>{{ preamble }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|USER_TOKEN|>{{ How are you? }}<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>{{ I am doing well! }}<|END_OF_TURN_TOKEN|>
-
- Use add_generation_prompt to add a prompt for the model to generate a response:
- >>> from transformers import AutoTokenizer
- >>> tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")
- >>> messages = [{"role": "user", "content": "Hello, how are you?"}]
- >>> tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
- '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>Hello, how are you?<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>'
-
- """
- default_template = (
- "{{ bos_token }}"
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% elif USE_DEFAULT_PROMPT == true %}"
- "{% set loop_messages = messages %}" # Or use the default system message if the flag is set
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = false %}"
- "{% endif %}"
- "{% if system_message != false %}" # Start with system message
- "{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }}"
- "{% endif %}"
- "{% for message in loop_messages %}" # Loop over all non-system messages
- "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
- "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
- "{% endif %}"
- "{% set content = message['content'] %}"
- "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
- "{{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% endif %}"
- "{% endfor %}"
- "{% if add_generation_prompt %}"
- "{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}"
- "{% endif %}"
- )
- default_template = default_template.replace(
- "USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false"
- )
- default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
- default_template = default_template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
-
- tool_use_template = (
- "{{ bos_token }}"
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% endif %}"
- "{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' }}"
- "{{ '# Safety Preamble' }}"
- "{{ '\nThe instructions in this section override those in the task description and style guide sections. Don\\'t answer questions that are harmful or immoral.' }}"
- "{{ '\n\n# System Preamble' }}"
- "{{ '\n## Basic Rules' }}"
- "{{ '\nYou are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user\\'s requests, you cite your sources in your answers, according to those instructions.' }}"
- "{{ '\n\n# User Preamble' }}"
- "{{ '\n' + system_message }}"
- "{{'\n\n## Available Tools\nHere is a list of tools that you have available to you:\n\n'}}"
- "{% for tool in tools %}"
- "{% if loop.index0 != 0 %}"
- "{{ '\n\n'}}"
- "{% endif %}"
- "{{'```python\ndef ' + tool.name + '('}}"
- "{% for param_name, param_fields in tool.parameter_definitions.items() %}"
- "{% if loop.index0 != 0 %}"
- "{{ ', '}}"
- "{% endif %}"
- "{{param_name}}: "
- "{% if not param_fields.required %}"
- "{{'Optional[' + param_fields.type + '] = None'}}"
- "{% else %}"
- "{{ param_fields.type }}"
- "{% endif %}"
- "{% endfor %}"
- '{{ \') -> List[Dict]:\n """\'}}'
- "{{ tool.description }}"
- "{% if tool.parameter_definitions|length != 0 %}"
- "{{ '\n\n Args:\n '}}"
- "{% for param_name, param_fields in tool.parameter_definitions.items() %}"
- "{% if loop.index0 != 0 %}"
- "{{ '\n ' }}"
- "{% endif %}"
- "{{ param_name + ' ('}}"
- "{% if not param_fields.required %}"
- "{{'Optional[' + param_fields.type + ']'}}"
- "{% else %}"
- "{{ param_fields.type }}"
- "{% endif %}"
- "{{ '): ' + param_fields.description }}"
- "{% endfor %}"
- "{% endif %}"
- '{{ \'\n """\n pass\n```\' }}'
- "{% endfor %}"
- "{{ '<|END_OF_TURN_TOKEN|>'}}"
- "{% for message in loop_messages %}"
- "{% set content = message['content'] %}"
- "{% if message['role'] == 'user' %}"
- "{{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% elif message['role'] == 'system' %}"
- "{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% endif %}"
- "{% endfor %}"
- "{{'<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write \\'Action:\\' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user\\'s last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:\n```json\n[\n {\n \"tool_name\": title of the tool in the specification,\n \"parameters\": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters\n }\n]```<|END_OF_TURN_TOKEN|>'}}"
- "{% if add_generation_prompt %}"
- "{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}"
- "{% endif %}"
- )
- default_tool_message = DEFAULT_RAG_PREAMBLE.replace("\n", "\\n").replace("'", "\\'")
- tool_use_template = tool_use_template.replace("DEFAULT_SYSTEM_MESSAGE", default_tool_message)
-
- rag_template = (
- "{{ bos_token }}"
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% endif %}"
- "{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' }}"
- "{{ '# Safety Preamble' }}"
- "{{ '\nThe instructions in this section override those in the task description and style guide sections. Don\\'t answer questions that are harmful or immoral.' }}"
- "{{ '\n\n# System Preamble' }}"
- "{{ '\n## Basic Rules' }}"
- "{{ '\nYou are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user\\'s requests, you cite your sources in your answers, according to those instructions.' }}"
- "{{ '\n\n# User Preamble' }}"
- "{{ '\n' + system_message }}"
- "{{ '<|END_OF_TURN_TOKEN|>'}}"
- "{% for message in loop_messages %}" # Loop over all non-system messages
- "{% set content = message['content'] %}"
- "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
- "{{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% elif message['role'] == 'system' %}"
- "{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}"
- "{% endif %}"
- "{% endfor %}"
- "{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>'}}"
- "{{ '' }}"
- "{% for document in documents %}" # Loop over all non-system messages
- "{{ '\nDocument: ' }}"
- "{{ loop.index0 }}\n"
- "{% for key, value in document.items() %}"
- "{{ key }}: {{value}}\n"
- "{% endfor %}"
- "{% endfor %}"
- "{{ ''}}"
- "{{ '<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' }}"
- "{{ 'Carefully perform the following instructions, in order, starting each with a new line.\n' }}"
- "{{ 'Firstly, Decide which of the retrieved documents are relevant to the user\\'s last input by writing \\'Relevant Documents:\\' followed by comma-separated list of document numbers. If none are relevant, you should instead write \\'None\\'.\n' }}"
- "{{ 'Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user\\'s last input by writing \\'Cited Documents:\\' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write \\'None\\'.\n' }}"
- "{% if citation_mode=='accurate' %}"
- "{{ 'Thirdly, Write \\'Answer:\\' followed by a response to the user\\'s last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.\n' }}"
- "{% endif %}"
- "{{ 'Finally, Write \\'Grounded answer:\\' followed by a response to the user\\'s last input in high quality natural english. Use the symbols and to indicate when a fact comes from a document in the search result, e.g my fact for a fact from document 0.' }}"
- "{{ '<|END_OF_TURN_TOKEN|>' }}"
- "{% if add_generation_prompt %}"
- "{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}"
- "{% endif %}"
- )
- default_rag_message = DEFAULT_RAG_PREAMBLE.replace("\n", "\\n").replace("'", "\\'")
- rag_template = rag_template.replace("DEFAULT_SYSTEM_MESSAGE", default_rag_message)
-
- return {"default": default_template, "tool_use": tool_use_template, "rag": rag_template}
-
def apply_tool_use_template(
self,
conversation: Union[List[Dict[str, str]]],
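Cohere is the odd one out here: its property returned a dict of named templates (`default`, `tool_use`, `rag`), the multi-template mechanism the docs hunk above alludes to. Selecting one by name still works as long as the checkpoint publishes named templates; a sketch with an illustrative document payload (`apply_chat_template` forwards extra kwargs such as `documents` and `citation_mode` into the template context):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CohereForAI/c4ai-command-r-v01")
messages = [{"role": "user", "content": "What's the weather in Toronto?"}]

prompt = tokenizer.apply_chat_template(
    messages,
    chat_template="rag",  # select a named template; assumes the checkpoint ships one
    documents=[{"title": "Forecast", "text": "Toronto: 5C and overcast today."}],
    citation_mode="accurate",
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt[:200])
```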
diff --git a/src/transformers/models/deprecated/gptsan_japanese/tokenization_gptsan_japanese.py b/src/transformers/models/deprecated/gptsan_japanese/tokenization_gptsan_japanese.py
index 51789e49b2d2..782f68bf921e 100644
--- a/src/transformers/models/deprecated/gptsan_japanese/tokenization_gptsan_japanese.py
+++ b/src/transformers/models/deprecated/gptsan_japanese/tokenization_gptsan_japanese.py
@@ -236,19 +236,6 @@ def convert_tokens_to_string(self, tokens):
text = "".join(words)
return text
- @property
- def default_chat_template(self):
- """
- A simple chat template that adds standard BOS, SEP and EOS tokens between messages while discarding role
- information.
- """
- return (
- "{% for message in messages %}"
- "{% if not loop.first %}{{ bos_token}}{% endif %}"
- "{{ sep_token }}{{ message.content }} {{ eos_token }}"
- "{% endfor %}"
- )
-
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
index = 0
if os.path.isdir(save_directory):
diff --git a/src/transformers/models/gpt2/tokenization_gpt2.py b/src/transformers/models/gpt2/tokenization_gpt2.py
index 9bca559d9ea0..badacf6dbe71 100644
--- a/src/transformers/models/gpt2/tokenization_gpt2.py
+++ b/src/transformers/models/gpt2/tokenization_gpt2.py
@@ -329,10 +329,3 @@ def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs):
if is_split_into_words or add_prefix_space:
text = " " + text
return (text, kwargs)
-
- @property
- def default_chat_template(self):
- """
- A simple chat template that ignores role information and just concatenates messages with EOS tokens.
- """
- return "{% for message in messages %}" "{{ message.content }}{{ eos_token }}" "{% endfor %}"
diff --git a/src/transformers/models/gpt2/tokenization_gpt2_fast.py b/src/transformers/models/gpt2/tokenization_gpt2_fast.py
index e6747119f422..90e83f0d35a3 100644
--- a/src/transformers/models/gpt2/tokenization_gpt2_fast.py
+++ b/src/transformers/models/gpt2/tokenization_gpt2_fast.py
@@ -139,12 +139,3 @@ def _encode_plus(self, *args, **kwargs) -> BatchEncoding:
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
files = self._tokenizer.model.save(save_directory, name=filename_prefix)
return tuple(files)
-
- @property
- # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.default_chat_template
- def default_chat_template(self):
- """
- A simple chat template that ignores role information and just concatenates messages with EOS tokens.
- """
-
- return "{% for message in messages %}" "{{ message.content }}{{ eos_token }}" "{% endfor %}"
diff --git a/src/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py b/src/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py
index 2504fa3cc051..c79e6d9ada15 100644
--- a/src/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py
+++ b/src/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py
@@ -228,11 +228,3 @@ def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
files = self._tokenizer.model.save(save_directory, name=filename_prefix)
return tuple(files)
-
- @property
- # Copied from transformers.models.gpt2.tokenization_gpt2.GPT2Tokenizer.default_chat_template
- def default_chat_template(self):
- """
- A simple chat template that ignores role information and just concatenates messages with EOS tokens.
- """
- return "{% for message in messages %}" "{{ message.content }}{{ eos_token }}" "{% endfor %}"
diff --git a/src/transformers/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py b/src/transformers/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py
index f36f7e3fd610..ea7f3959c78d 100644
--- a/src/transformers/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py
+++ b/src/transformers/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py
@@ -161,18 +161,6 @@ def convert_tokens_to_string(self, tokens):
out_string = "".join(tokens).strip()
return out_string
- @property
- def default_chat_template(self):
- """
- A simple chat template that just adds BOS/EOS tokens around messages while discarding role information.
- """
- return (
- "{% for message in messages %}"
- "{{ bos_token + eos_token + message.content + eos_token }}"
- "{% endfor %}"
- "{% if add_generation_prompt %} {{ bos_token + eos_token }} {% endif %}"
- )
-
def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
index = 0
if os.path.isdir(save_directory):
diff --git a/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py b/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
index 1000bfd1b6c8..262aeaba5eea 100644
--- a/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
+++ b/src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
@@ -294,19 +294,3 @@ def decode_fast(self, token_ids: Union[int, List[int]]) -> str:
"""
return self.sp_model.decode(token_ids)
-
- @property
- def default_chat_template(self):
- """
- This chat template formats messages like an instant messenger chat log, with "User:" and "Bot:" strings
- preceding messages. BOS tokens are added between all messages.
- """
- return (
- "{{ eos_token }}{{ bos_token }}"
- "{% for message in messages %}"
- "{% if message['role'] == 'user' %}{{ 'User: ' + message['content']}}"
- "{% else %}{{ 'Bot: ' + message['content']}}{% endif %}"
- "{{ message['text'] }}{{ bos_token }}"
- "{% endfor %}"
- "Bot:"
- )
diff --git a/src/transformers/models/idefics2/processing_idefics2.py b/src/transformers/models/idefics2/processing_idefics2.py
index c665ba74d06a..2e14118144ba 100644
--- a/src/transformers/models/idefics2/processing_idefics2.py
+++ b/src/transformers/models/idefics2/processing_idefics2.py
@@ -251,60 +251,3 @@ def model_input_names(self):
tokenizer_input_names = self.tokenizer.model_input_names
image_processor_input_names = self.image_processor.model_input_names
return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
-
- @property
- def default_chat_template(self):
- """
- This template formats inputs in the form of a chat history. For each message in the chat history:
- * the template will output the role of the speaker followed by the content of the message.
- * content can be a single string or a list of strings and images.
- * If the content element is an image, the template will output a sequence of <image> tokens and <fake_token_around_image> token before and after each image
- * The template will output an <end_of_utterance> token at the end of each message.
-
- Example:
-
- ```python
- messages = [{
- "role": "user",
- "content": [
- {"type": "text", "text": "What’s in this image?"},
- {"type": "image"},
- {"type": "image"},
- ],
- },
- {
- "role": "assistant",
- "content": [{"type": "text", "text": "This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground."},]
- }]
- ```
-
- Will create outputs like:
- ```
- User: What is in this Image?<image><image><end_of_utterance>
- Assistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on the ground.<end_of_utterance>
- ```
- """
- # fmt: off
- return (
- "{% for message in messages %}"
- "{{message['role'].capitalize()}}"
- "{% if message['content'][0]['type'] == 'image' %}"
- "{{':'}}"
- "{% else %}"
- "{{': '}}"
- "{% endif %}"
- "{% for line in message['content'] %}"
- "{% if line['type'] == 'text' %}"
- "{{line['text']}}"
- "{% elif line['type'] == 'image' %}"
- "{{ '' }}"
- "{% endif %}"
- "{% endfor %}"
- "\n"
- "{% endfor %}"
-
- "{% if add_generation_prompt %}"
- "{{ 'Assistant:' }}"
- "{% endif %}"
- )
- # fmt: on
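The same migration applies to processors: with the property gone, the Idefics2 template has to come from the checkpoint itself (processors persist it in a `chat_template.json` next to the config on the Hub). A usage sketch, assuming the Hub checkpoint ships the template the deleted docstring describes:

```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image"},
        ],
    },
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
print(prompt)
# Expected shape: "User: What's in this image?<image><end_of_utterance>\nAssistant:"
```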
diff --git a/src/transformers/models/llama/tokenization_llama.py b/src/transformers/models/llama/tokenization_llama.py
index 80865ba98d6d..385ad2d88e10 100644
--- a/src/transformers/models/llama/tokenization_llama.py
+++ b/src/transformers/models/llama/tokenization_llama.py
@@ -411,57 +411,3 @@ def create_token_type_ids_from_sequences(
output += [1] * len(bos_token_id + token_ids_1 + eos_token_id)
return output
-
- @property
- def default_chat_template(self):
- """
- LLaMA uses [INST] and [/INST] to indicate user messages, and <<SYS>> and <</SYS>> to indicate system messages.
- Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict
- user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering
- rather than needing special tokens. The system message is partly 'embedded' in the first user message, which
- results in an unusual token ordering when it is present. This template should definitely be changed if you wish
- to fine-tune a model with more flexible role ordering!
-
- The output should look something like:
-
- <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos><bos>[INST] Prompt [/INST] Answer <eos>
- <bos>[INST] Prompt [/INST]
-
- The reference for this chat template is [this code
- snippet](https://github.com/facebookresearch/llama/blob/556949fdfb72da27c2f4a40b7f0e4cf0b8153a28/llama/generation.py#L320-L362)
- in the original repository.
- """
- template = (
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% elif USE_DEFAULT_PROMPT == true and not '<>' in messages[0]['content'] %}"
- "{% set loop_messages = messages %}" # Or use the default system message if the flag is set
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = false %}"
- "{% endif %}"
- "{% for message in loop_messages %}" # Loop over all non-system messages
- "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
- "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
- "{% endif %}"
- "{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message
- "{% set content = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}"
- "{% else %}"
- "{% set content = message['content'] %}"
- "{% endif %}"
- "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
- "{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
- "{% elif message['role'] == 'system' %}"
- "{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ ' ' + content.strip() + ' ' + eos_token }}"
- "{% endif %}"
- "{% endfor %}"
- )
- template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false")
- default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
- template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
-
- return template
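For workflows that still depend on the Llama default, the safest migration is to freeze the legacy template onto the checkpoint while running a transformers release that still exposes the property. A sketch; the local save path is illustrative:

```python
from transformers import AutoTokenizer

# Run on a transformers version that still defines `default_chat_template`.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

legacy = getattr(tokenizer, "default_chat_template", None)
if tokenizer.chat_template is None and legacy is not None:
    tokenizer.chat_template = legacy
    tokenizer.save_pretrained("./llama-2-7b-chat-pinned")
```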
diff --git a/src/transformers/models/llama/tokenization_llama_fast.py b/src/transformers/models/llama/tokenization_llama_fast.py
index 91d3bf361517..67e339b4290a 100644
--- a/src/transformers/models/llama/tokenization_llama_fast.py
+++ b/src/transformers/models/llama/tokenization_llama_fast.py
@@ -241,61 +241,6 @@ def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] =
return (out_vocab_file,)
- @property
- # Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.default_chat_template
- def default_chat_template(self):
- """
- LLaMA uses [INST] and [/INST] to indicate user messages, and <<SYS>> and <</SYS>> to indicate system messages.
- Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict
- user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering
- rather than needing special tokens. The system message is partly 'embedded' in the first user message, which
- results in an unusual token ordering when it is present. This template should definitely be changed if you wish
- to fine-tune a model with more flexible role ordering!
-
- The output should look something like:
-
- <bos>[INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer <eos><bos>[INST] Prompt [/INST] Answer <eos>
- <bos>[INST] Prompt [/INST]
-
- The reference for this chat template is [this code
- snippet](https://github.com/facebookresearch/llama/blob/556949fdfb72da27c2f4a40b7f0e4cf0b8153a28/llama/generation.py#L320-L362)
- in the original repository.
- """
- template = (
- "{% if messages[0]['role'] == 'system' %}"
- "{% set loop_messages = messages[1:] %}" # Extract system message if it's present
- "{% set system_message = messages[0]['content'] %}"
- "{% elif USE_DEFAULT_PROMPT == true and not '<>' in messages[0]['content'] %}"
- "{% set loop_messages = messages %}" # Or use the default system message if the flag is set
- "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}"
- "{% else %}"
- "{% set loop_messages = messages %}"
- "{% set system_message = false %}"
- "{% endif %}"
- "{% for message in loop_messages %}" # Loop over all non-system messages
- "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}"
- "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}"
- "{% endif %}"
- "{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message
- "{% set content = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}"
- "{% else %}"
- "{% set content = message['content'] %}"
- "{% endif %}"
- "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way
- "{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}"
- "{% elif message['role'] == 'system' %}"
- "{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}"
- "{% elif message['role'] == 'assistant' %}"
- "{{ ' ' + content.strip() + ' ' + eos_token }}"
- "{% endif %}"
- "{% endfor %}"
- )
- template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false")
- default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'")
- template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message)
-
- return template
-
# TODO ArthurZ let's rely on the template processor instead, refactor all fast tokenizers
# Copied from transformers.models.llama.tokenization_llama.LlamaTokenizer.build_inputs_with_special_tokens
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
diff --git a/src/transformers/models/llava_next_video/processing_llava_next_video.py b/src/transformers/models/llava_next_video/processing_llava_next_video.py
index 81426b3a0af3..6b5e86ab4149 100644
--- a/src/transformers/models/llava_next_video/processing_llava_next_video.py
+++ b/src/transformers/models/llava_next_video/processing_llava_next_video.py
@@ -159,63 +159,3 @@ def model_input_names(self):
tokenizer_input_names = self.tokenizer.model_input_names
image_processor_input_names = self.image_processor.model_input_names
return list(dict.fromkeys(tokenizer_input_names + image_processor_input_names))
-
- @property
- def default_chat_template(self):
- """
- This default vicuna template formats inputs in the form of a chat history. For each message in the chat history:
- * the template will output the role of the speaker followed by the content of the message.
- * content is a list of strings and images.
- * If the content element is an image, the template will output a sequence of <image> or <video>