
llama : add enum for built-in chat templates #10623


Merged · 5 commits · Dec 2, 2024

Conversation

@ngxson ngxson (Collaborator) commented Dec 2, 2024

Add enum llm_chat_template to enumerate all built-in chat templates. This also helps with generating documentation, e.g. the `--chat-template` help text:

--chat-template JINJA_TEMPLATE          set custom jinja chat template (default: template taken from model's
                                        metadata)
                                        if suffix/prefix are specified, template will be disabled
                                        list of built-in templates:
                                        chatglm3, chatglm4, chatml, command-r, deepseek, deepseek2, exaone3,
                                        gemma, granite, llama2, llama2-sys, llama2-sys-bos, llama2-sys-strip,
                                        llama3, minicpm, mistral-v1, mistral-v3, mistral-v3-tekken,
                                        mistral-v7, monarch, openchat, orion, phi3, rwkv-world, vicuna,
                                        vicuna-orca, zephyr
                                        (env: LLAMA_ARG_CHAT_TEMPLATE)
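A minimal sketch of the approach: an enum of built-in templates plus a lookup table mapping each user-facing name to its enum value. The names mirror the help output above, but the enum values, map, and `llm_chat_template_from_str` helper shown here are illustrative assumptions; the identifiers in llama.cpp may differ.

```cpp
#include <map>
#include <string>

// Hypothetical enum of built-in chat templates (a small subset for illustration).
enum llm_chat_template {
    LLM_CHAT_TEMPLATE_CHATML,
    LLM_CHAT_TEMPLATE_LLAMA_2,
    LLM_CHAT_TEMPLATE_LLAMA_3,
    LLM_CHAT_TEMPLATE_ZEPHYR,
    LLM_CHAT_TEMPLATE_UNKNOWN,
};

// Name -> enum mapping; the help listing can be generated by iterating the keys.
static const std::map<std::string, llm_chat_template> LLM_CHAT_TEMPLATES = {
    { "chatml", LLM_CHAT_TEMPLATE_CHATML  },
    { "llama2", LLM_CHAT_TEMPLATE_LLAMA_2 },
    { "llama3", LLM_CHAT_TEMPLATE_LLAMA_3 },
    { "zephyr", LLM_CHAT_TEMPLATE_ZEPHYR  },
};

// Resolve a template name; unknown names fall back to LLM_CHAT_TEMPLATE_UNKNOWN.
llm_chat_template llm_chat_template_from_str(const std::string & name) {
    auto it = LLM_CHAT_TEMPLATES.find(name);
    return it == LLM_CHAT_TEMPLATES.end() ? LLM_CHAT_TEMPLATE_UNKNOWN : it->second;
}
```

With a single table as the source of truth, the argument parser, the template dispatcher, and the printed list of built-in templates cannot drift out of sync.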

@github-actions github-actions bot added the `testing` label (Everything test related) Dec 2, 2024
@ngxson ngxson changed the title llama : add enum for supported chat templates llama : add enum for built-in chat templates Dec 2, 2024
@ngxson ngxson marked this pull request as ready for review December 2, 2024 12:53
@ngxson ngxson requested a review from ggerganov December 2, 2024 12:53
@ngxson ngxson merged commit 642330a into ggml-org:master Dec 2, 2024
44 checks passed
tinglou pushed a commit to tinglou/llama.cpp that referenced this pull request Dec 7, 2024
* llama : add enum for supported chat templates

* use "built-in" instead of "supported"

* arg: print list of built-in templates

* fix test

* update server README
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Dec 20, 2024
(same commit message as above)
Labels: examples, server, testing