
Add Jinja template support #11016

Merged (47 commits, Jan 21, 2025)
Conversation

@ochafik (Collaborator) commented Dec 30, 2024

Subset of #9639 with just the Jinja templating support.

Proper tool support (grammar constraints, lazy grammar triggering, tool call parsing & stop reason) will come in a follow up PR.

  • Copies minja.hpp & chat-template.hpp from google/minja (created for this 😅) at this commit
  • Adds --jinja flag to llama-server, llama-cli, llama-run
  • Adds --chat-template-file flag to llama-server, llama-cli (related: Added chat template support to llama-run #11215 )
  • Loads tokenizer.chat_template (or tokenizer.chat_template.tool_use if defined, only when the request has tools).
  • Dual testing in test-chat-template.cpp of the legacy ad-hoc templating and the new Jinja route. Wherever the expected outputs diverge, the Jinja expectations should be the more correct ones (note that templates are run with trim_blocks = true, lstrip_blocks = true)
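The tool_use selection rule above can be sketched as follows (a hypothetical Python helper for illustration only, not llama.cpp code; the dict keys are illustrative stand-ins for the GGUF metadata keys):

```python
# Hypothetical sketch of the template-selection rule described above:
# prefer the tool_use variant only when it exists AND the request has tools.
def pick_chat_template(templates: dict, request_has_tools: bool) -> str:
    if request_has_tools and "tool_use" in templates:
        return templates["tool_use"]  # tokenizer.chat_template.tool_use
    return templates["default"]       # tokenizer.chat_template

templates = {
    "default": "{{ messages[0]['content'] }}",
    "tool_use": "{{ tools | length }} tools: {{ messages[0]['content'] }}",
}
print(pick_chat_template(templates, request_has_tools=False))  # default template
print(pick_chat_template(templates, request_has_tools=True))   # tool_use variant
```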

Example usage:

# Launch in background
./build/bin/llama-server \
  -hfr bartowski/Qwen2.5-7B-Instruct-GGUF \
  -hff Qwen2.5-7B-Instruct-Q4_K_M.gguf \
  --jinja &

curl http://localhost:8080/v1/chat/completions \
  -d '{
    "model": "gpt-3.5-turbo",
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "ipython",
          "description": "Runs code in an ipython interpreter and returns the result of the execution after 60 seconds.",
          "parameters": {
            "type": "object",
            "properties": {
              "code": {
                "type": "string",
                "description": "The code to run in the ipython interpreter."
              }
            },
            "required": ["code"]
          }
        }
      }
    ],
    "messages": [
      {
        "role": "user",
        "content": "Print a hello world message with python (using single quotes '"'"' for strings)."
      }
    ]
  }'
Output:
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "<tool_call>\n{\"name\": \"ipython\", \"arguments\": {\"code\": \"print('Hello world!')\"}}\n</tool_call>",
        "role": "assistant"
      }
    }
  ],
  "created": 1736811609,
  "model": "gpt-3.5-turbo",
  "system_fingerprint": "b4494-a57bb94e",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 25,
    "prompt_tokens": 205,
    "total_tokens": 230
  },
  "id": "chatcmpl-5YJXFVhvjoMDlLx1asuWNdSO3JVWWsUF",
  "timings": {
    "prompt_n": 1,
    "prompt_ms": 155.151,
    "prompt_per_token_ms": 155.151,
    "prompt_per_second": 6.445333900522716,
    "predicted_n": 25,
    "predicted_ms": 419.714,
    "predicted_per_token_ms": 16.78856,
    "predicted_per_second": 59.56437002339688
  }
}

TODO:

  • Add cross-testing in test-chat-template.cpp (note that minja is tested against a lot of templates in its own repo)
  • Add some instructions here
  • Add more server tests to exercise the template overrides.

@github-actions bot added the script (Script related), examples, python (python script changes), and server labels on Dec 30, 2024
@ericcurtin (Collaborator):

Feel free to add the option to llama-run for basic testing also @ochafik

@github-actions bot added the testing (Everything test related) label on Jan 13, 2025
common/minja.hpp Outdated
Comment on lines 41 to 48
static std::string normalize_newlines(const std::string & s) {
#ifdef _WIN32
static const std::regex nl_regex("\r\n");
return std::regex_replace(s, nl_regex, "\n");
#else
return s;
#endif
}
Member:
Not sure what the original purpose of this was, but I think it can be removed, as well as the definition of ENDL as \r\n on win32. It shouldn't make a difference with stringstream.

Collaborator Author (ochafik):

Dropped ENDL and one usage of this function (at the end of rendering; one usage is still needed to shield the parser from CRLFs), thanks!

examples/server/server.cpp (outdated review thread, resolved)
@ngxson (Collaborator) commented Jan 21, 2025

One small thing to note: some Jinja templates are not "linear", meaning each conversation turn is not self-contained but can modify the content that came before it.

For example, the new deepseek-r1 distill has {% set content = content.split('</think>')[-1] %} to remove the thinking process from the conversation history. I also once saw a template that adds an EOS token after each formatted chat message, which also breaks this logic.

The consequence is that this will break common_chat_format_single (used by llama-cli) and apply_chat_template (used by llama-run), since they assume each new message is self-contained (i.e. an addition, not a modification of earlier content).

A solution is to track the cache at the token level (not the conversation level), which I introduced in #11203; @ericcurtin, feel free to port this to llama-run if you want. This approach is similar to the server implementation.
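The failure mode can be illustrated with a toy sketch (plain Python for illustration, not llama.cpp code; the render function and the character-level diff are simplifications of the real tokenized cache): with a template that strips <think>...</think> from history, the text that went through the cache during generation is no longer a prefix of the next rendered prompt, so per-message deltas break, while a common-prefix diff still recovers the minimal suffix to re-evaluate.

```python
# Toy sketch (not llama.cpp code) of why "non-linear" templates break
# per-message deltas, and how a common-prefix diff (as in #11203) copes.

def render(messages):
    # Mimics the deepseek-r1 distill template: strip <think>...</think>
    # reasoning from assistant turns when re-rendering the history.
    parts = []
    for m in messages:
        content = m["content"]
        if m["role"] == "assistant":
            content = content.split("</think>")[-1]
        parts.append(f"<|{m['role']}|>{content}")
    return "".join(parts)

def common_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

msgs = [{"role": "user", "content": "hi"}]
prompt = render(msgs)

# During generation, the model's raw output (thinking included) went
# through the cache:
generated = "<|assistant|><think>hmm</think>hello"
cached = prompt + generated

# Next turn: re-render the whole conversation. The template strips the
# thinking, so the cache is no longer a prefix of the new prompt:
msgs.append({"role": "assistant", "content": "<think>hmm</think>hello"})
full = render(msgs)
assert not full.startswith(cached)

# Tracking the cache at a finer granularity still salvages the shared
# prefix; only the divergent suffix needs re-evaluation:
keep = common_prefix_len(cached, full)
print(full[keep:])  # -> hello
```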

@ochafik ochafik merged commit 6171c9d into ggml-org:master Jan 21, 2025
47 checks passed
@ochafik (Collaborator Author) commented Jan 21, 2025

Thanks everyone for the insightful reviews! More from #9639 to come soon :-)

@fairydreaming (Collaborator):

Not sure if this is a special case or the template is broken, but when I load minimax-text-01 (my work-in-progress) with the following template:

{% for message in messages %}{% if message['role'] == 'system' %}{{ '<beginning_of_sentence>system ai_setting=assistant\\n' + message['content'][0]['text'] + '<end_of_sentence>\\n'}}{% elif message['role'] == 'user' %}{{ '<beginning_of_sentence>user name=user\\n' + message['content'][0]['text'] + '<end_of_sentence>\\n'}}{% elif message['role'] == 'assistant' %}{{ '<beginning_of_sentence>ai name=assistant\\n' }}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{% generation %}{{ content['text'] }}{% endgeneration %}{% endfor %}{{ '<end_of_sentence>\\n' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<beginning_of_sentence>ai name=assistant\\n' }}{% endif %}

with this PR llama.cpp crashes during model loading:

terminate called after throwing an instance of 'std::runtime_error'
  what():  Expected block keyword at row 1, column 492:
{% for message in messages %}{% if message['role'] == 'system' %}{{ '<beginning_of_sentence>system ai_setting=assistant\n' + message['content'][0]['text'] + '<end_of_sentence>\n'}}{% elif message['role'] == 'user' %}{{ '<beginning_of_sentence>user name=user\n' + message['content'][0]['text'] + '<end_of_sentence>\n'}}{% elif message['role'] == 'assistant' %}{{ '<beginning_of_sentence>ai name=assistant\n' }}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{% generation %}{{ content['text'] }}{% endgeneration %}{% endfor %}{{ '<end_of_sentence>\n' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<beginning_of_sentence>ai name=assistant\n' }}{% endif %}
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           ^

@ochafik (Collaborator Author) commented Jan 21, 2025

> Not sure if this is a special case or the template is broken, but when I load minimax-text-01 (my work-in-progress) with the following template:
>
> {% for message in messages %}{% if message['role'] == 'system' %}{{ '<beginning_of_sentence>system ai_setting=assistant\\n' + message['content'][0]['text'] + '<end_of_sentence>\\n'}}{% elif message['role'] == 'user' %}{{ '<beginning_of_sentence>user name=user\\n' + message['content'][0]['text'] + '<end_of_sentence>\\n'}}{% elif message['role'] == 'assistant' %}{{ '<beginning_of_sentence>ai name=assistant\\n' }}{% for content in message['content'] | selectattr('type', 'equalto', 'text') %}{% generation %}{{ content['text'] }}{% endgeneration %}{% endfor %}{{ '<end_of_sentence>\\n' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<beginning_of_sentence>ai name=assistant\\n' }}{% endif %}

Hey @fairydreaming, thanks for testing & reporting! Your template contains an exotic {% generation %}...{% endgeneration %} syntax that doesn't seem to be supported by, say, this online Jinja parser either.

> terminate called after throwing an instance of 'std::runtime_error'
> what(): Expected block keyword at row 1, column 492:

I could certainly make the error more informative, though; feel free to file something on https://github.com/google/minja to that end (and/or any feature request).

Looking forward to testing your model, good luck with it!

@fairydreaming (Collaborator):

@ochafik I did some research and it seems to be a custom keyword introduced in HF transformers: huggingface/transformers#30650

Fortunately among all the models I have currently on disk only MiniMax-Text-01 uses this.
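Since transformers' {% generation %} markers only affect its assistant-token masking feature and not the rendered text, one possible workaround (a hypothetical pre-processing step, not something this PR or minja does) is to strip the tags before handing the template to an engine that rejects the keyword:

```python
import re

# Hypothetical workaround: {% generation %} / {% endgeneration %} only mark
# assistant spans for transformers' token masking; the rendered string is
# identical without them, so stripping them yields an equivalent template
# for engines that don't know the custom keyword.
GENERATION_TAG = re.compile(r"\{%-?\s*(?:end)?generation\s*-?%\}")

def strip_generation_tags(template: str) -> str:
    return GENERATION_TAG.sub("", template)

tpl = "{% generation %}{{ content['text'] }}{% endgeneration %}"
print(strip_generation_tags(tpl))  # -> {{ content['text'] }}
```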

@ochafik (Collaborator Author) commented Jan 22, 2025

> @ochafik I did some research and it seems to be a custom keyword introduced in HF transformers: huggingface/transformers#30650
>
> Fortunately among all the models I have currently on disk only MiniMax-Text-01 uses this.

@fairydreaming thanks for researching that, will track support in google/minja#28

@ochafik mentioned this pull request on Jan 22, 2025
anagri pushed a commit to BodhiSearch/llama.cpp that referenced this pull request Jan 26, 2025
* Copy minja from google/minja@58f0ca6

* Add --jinja and --chat-template-file flags

* Add missing <optional> include

* Avoid print in get_hf_chat_template.py

* No designated initializers yet

* Try and work around msvc++ non-macro max resolution quirk

* Update test_chat_completion.py

* Wire LLM_KV_TOKENIZER_CHAT_TEMPLATE_N in llama_model_chat_template

* Refactor test-chat-template

* Test templates w/ minja

* Fix deprecation

* Add --jinja to llama-run

* Update common_chat_format_example to use minja template wrapper

* Test chat_template in e2e test

* Update utils.py

* Update test_chat_completion.py

* Update run.cpp

* Update arg.cpp

* Refactor common_chat_* functions to accept minja template + use_jinja option

* Attempt to fix linkage of LLAMA_CHATML_TEMPLATE

* Revert LLAMA_CHATML_TEMPLATE refactor

* Normalize newlines in test-chat-templates for windows tests

* Forward decl minja::chat_template to avoid eager json dep

* Flush stdout in chat template before potential crash

* Fix copy elision warning

* Rm unused optional include

* Add missing optional include to server.cpp

* Disable jinja test that has a cryptic windows failure

* minja: fix vigogne (google/minja#22)

* Apply suggestions from code review

Co-authored-by: Xuan Son Nguyen <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>

* Finish suggested renamings

* Move chat_templates inside server_context + remove mutex

* Update --chat-template-file w/ recent change to --chat-template

* Refactor chat template validation

* Guard against missing eos/bos tokens (null token otherwise throws in llama_vocab::impl::token_get_attr)

* Warn against missing eos / bos tokens when jinja template references them

* rename: common_chat_template[s]

* reinstate assert on chat_templates.template_default

* Update minja to google/minja@b8437df

* Update minja to google/minja#25

* Update minja from google/minja#27

* rm unused optional header

---------

Co-authored-by: Xuan Son Nguyen <[email protected]>
Co-authored-by: Georgi Gerganov <[email protected]>
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Feb 13, 2025
@ggerganov (Member):

@ochafik I think we should take some time to wrap the jinja / json functionality better, because taking a more detailed look now, I am afraid these large headers are proliferating across the examples codebase more than they are supposed to.

Here is what I think needs to be changed:

  • All common_chat_ interfaces from the common/common.h header should be moved to common/chat.h
  • common/common.cpp should stop including json.hpp and the common_chat_ implementations there should be moved to common/chat.cpp. There is some curl-related functionality in common/common.cpp that still requires json.hpp, so this can stay for a while but ideally it should also stop using json.hpp at some point.
  • common/chat.h should not include json.hpp. We have to be really careful with this header and not allow it to spread across the source files. The exception is the server example where the json functionality is already at its core and cannot be fixed anymore. But for example, main.cpp should not need to include json.hpp directly. With the change in this PR, it now includes common/chat-template.hpp which brings all the jinja/json stuff. Instead, this should be wrapped and accessed only through common/chat.h.
  • The minja sources (i.e. common/minja.hpp and common/chat-template.hpp) should only be included in common/chat.cpp and nowhere else. The minja sources could be moved to a separate folder common/minja and it can be included for things like test-chat.cpp, but it should not be included by any of the other sources.

@ochafik (Collaborator Author) commented Feb 15, 2025

> Here is what I think needs to be changed:
>
>   • All common_chat_ interfaces from the common/common.h header should be moved to common/chat.h
>   • common/common.cpp should stop including json.hpp and the common_chat_ implementations there should be moved to common/chat.cpp. There is some curl-related functionality in common/common.cpp that still requires json.hpp, so this can stay for a while but ideally it should also stop using json.hpp at some point.
>   • common/chat.h should not include json.hpp. We have to be really careful with this header and not allow it to spread across the source files. The exception is the server example where the json functionality is already at its core and cannot be fixed anymore. But for example, main.cpp should not need to include json.hpp directly. With the change in this PR, it now includes common/chat-template.hpp which brings all the jinja/json stuff. Instead, this should be wrapped and accessed only through common/chat.h.

@ggerganov Thanks! I think this works great if we start passing tools & json_schema as JSON strings (slight inefficiency to dump in the server and then parse again in chat, but hopefully a negligible cost; I will try to measure it). Preparing a cleanup.

(cc/ @bandoti, heads up re/ #11556: big internal changes / cleanup looming ahead that should make it easier to wire into the cli)
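The dump-then-parse overhead mentioned above can be ballparked with a stand-alone sketch (a hypothetical measurement harness using Python's json module, not the nlohmann::json path llama.cpp actually uses, so absolute numbers will differ):

```python
import json
import timeit

# A tools array roughly like the one in the example usage above.
tools = [{
    "type": "function",
    "function": {
        "name": "ipython",
        "description": "Runs code in an ipython interpreter.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}] * 8  # a handful of tool definitions

def round_trip():
    # Server side dumps to a string; chat side parses it back.
    return json.loads(json.dumps(tools))

assert round_trip() == tools  # the round trip is lossless
n = 10_000
seconds = timeit.timeit(round_trip, number=n)
print(f"~{seconds / n * 1e6:.1f} us per dump+parse round trip")
```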

>   • The minja sources (i.e. common/minja.hpp and common/chat-template.hpp) should only be included in common/chat.cpp and nowhere else. The minja sources could be moved to a separate folder common/minja and it can be included for things like test-chat.cpp, but it should not be included by any of the other sources.

👍

Labels: examples, python (python script changes), script (Script related), server, testing (Everything test related)

6 participants