Added --chat-template-file to llama-run #11961
Conversation
Preceding PR: #11922. I still get this error even though there is no merge commit:
Force-pushed from 193eb87 to d1b57cf
Force-pushed from d1b57cf to 530ba31
if (!chat_template_file.empty()) {
    chat_template = read_chat_template_file(chat_template_file);
}
auto chat_templates = common_chat_templates_init(llama_data.model.get(), chat_template.empty() ? nullptr : chat_template);
We should do:
chat_template.empty() ? "" : chat_template
here. Passing nullptr to a reference is not allowed. I wish the compiler caught these things.
common_chat_templates_ptr common_chat_templates_init(
    const struct llama_model * model,
    const std::string & chat_template_override,
    const std::string & bos_token_override = "",
    const std::string & eos_token_override = "");
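For reference, the call site with the suggested fix applied would look like this (a sketch; the variable names are taken from the snippet above):

auto chat_templates = common_chat_templates_init(
        llama_data.model.get(),
        chat_template.empty() ? "" : chat_template);

Note that when chat_template is empty it already is the empty string, so simply passing chat_template would behave the same; the ternary just makes the intent explicit.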
Maybe the std::string class is smart enough to interpret all of these as the same thing: "", '', 0, NULL, nullptr, and that's why it compiles/works 🤷. So it might just be implicitly converting it to "".
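What actually happens: std::string has a non-explicit constructor taking const char *, so nullptr converts to a temporary std::string that binds to the const reference. The compiler accepts it, but constructing a std::string from a null pointer is undefined behavior at runtime (C++23 adds a deleted basic_string(std::nullptr_t) overload to reject it at compile time). A minimal sketch illustrating the conversion:

#include <string>

static void takes_ref(const std::string & s) { (void) s; }

int main() {
    takes_ref("");         // fine: "" converts to an empty std::string temporary
    // takes_ref(nullptr); // pre-C++23: compiles, but std::string(nullptr) is
                           // undefined behavior at runtime (often a crash);
                           // C++23: ill-formed, basic_string(std::nullptr_t) is deleted
    return 0;
}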
Relates to: #11178
Added the --chat-template-file CLI option to llama-run. If specified, the file is read and its content is passed to common_chat_templates_from_model, overriding the model's chat template.
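The file-reading helper itself is not shown in this thread; a minimal sketch of what read_chat_template_file might look like (error handling in the actual PR may differ):

#include <fstream>
#include <sstream>
#include <string>

static std::string read_chat_template_file(const std::string & chat_template_file) {
    std::ifstream file(chat_template_file);
    if (!file) {
        return "";           // the real helper presumably reports an error here
    }
    std::ostringstream ss;
    ss << file.rdbuf();      // read the whole template file into memory
    return ss.str();
}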
This also enables running the granite-code model from ollama, as sketched below.
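A hypothetical invocation (the template path is made up, and the ollama:// model prefix is assumed from llama-run's usual model-source syntax):

llama-run --chat-template-file ./granite-template.jinja ollama://granite-code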