
use LLM_KV instead of gguf_find_key #12672


Merged
merged 1 commit on Apr 1, 2025

Conversation

@jklincn (Contributor) commented Mar 31, 2025

Hi, I found this TODO while reading the llama_model_loader code, so I added LLM_KV_GENERAL_FILE_TYPE (matching the key name in gguf.py) to complete it.

I ran the CI locally and everything passed successfully.

This is my second PR. Please feel free to let me know if there's anything I should improve or adjust.

@ngxson ngxson merged commit e39e727 into ggml-org:master Apr 1, 2025
48 checks passed