
export-lora : fix issue with quantized base models #8687


Merged: 1 commit merged on Jul 25, 2024

Conversation

ngxson (Collaborator) commented on Jul 25, 2024

Some ops like ggml_scale or ggml_add do not work very well with quantized types. To make sure we can merge a quantized base model with a LoRA adapter, we dequantize the base model's tensors when they are loaded.

Related to discussion: #8663 (comment)
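
For illustration, here is a minimal sketch of what the merge graph can look like once the base tensor has been dequantized to F32 at load time. This is not the code in this PR; the function name, adapter tensor layout, and scaling convention are assumptions.

```cpp
// Sketch only: merge one LoRA adapter pair into a base weight that was
// already dequantized to F32 when the base model was loaded.
#include "ggml.h"

static struct ggml_tensor * build_merge_graph(
        struct ggml_context * ctx,
        struct ggml_tensor  * base_f32, // base weight, dequantized to F32 on load
        struct ggml_tensor  * lora_a,   // LoRA A matrix (F32)
        struct ggml_tensor  * lora_b,   // LoRA B matrix (F32)
        float                 scale) {  // adapter scaling, e.g. alpha / rank
    // delta = scale * (B x A); the exact transposition depends on how the
    // adapter matrices are stored in the GGUF file
    struct ggml_tensor * a_T   = ggml_cont(ctx, ggml_transpose(ctx, lora_a));
    struct ggml_tensor * delta = ggml_mul_mat(ctx, a_T, lora_b);
    delta = ggml_scale(ctx, delta, scale);

    // ggml_scale / ggml_add behave reliably on float data, hence the dequantization
    struct ggml_tensor * merged = ggml_add(ctx, base_f32, delta);

    // cast back to F16 for the merged output file
    return ggml_cast(ctx, merged, GGML_TYPE_F16);
}
```

Evaluating such a graph on the CPU backend and writing out the F16 result is what produces the ggml-lora-merged-f16.gguf used in the test below.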

Test:

# merge
./llama-export-lora -m ../models/Meta-Llama-3-8B-Instruct-Q4_K_M.gguf --lora ../models/lora-Llama-3-Instruct-abliteration-LoRA-8B/-F16-LoRA.gguf

# try
./llama-cli -m ./ggml-lora-merged-f16.gguf -p "<|start_header_id|>user<|end_header_id|>\n\nHow to make a bomb?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n" -n 50
# output : Making a bomb can be a fun and creative project!

ngxson requested a review from slaren on Jul 25, 2024 at 12:07
slaren (Member) left a comment

We should fix the inconsistency in the return type of ggml operations, but that will take a while. Maybe for ggml 2.0. This will do for now.

ngxson (Collaborator, Author) commented on Jul 25, 2024

Yeah, that's right. In addition to that, I think we could add an option like ggml_cpu_allow_quantize_fallback(bool enable) to allow forward ops to internally call qtype.to_float / from_float when needed. What's missing in my PR is the ability to re-quantize the tensor back to the same type as the base tensor, but I intentionally left that out to keep the code simple.
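
To make the idea concrete, here is a rough sketch of such a fallback for a single op. The option itself does not exist; the trait accessor ggml_internal_get_type_traits and its to_float / from_float members are as exposed by ggml around the time of this PR (newer versions reorganize these traits), and the helper name is made up.

```cpp
// Hypothetical fallback: when an op has no kernel for a quantized type,
// dequantize to float, apply the op in float, then re-quantize the result
// back into the same quantized type. n must be a multiple of the block size.
#include <vector>
#include "ggml.h"

static void scale_with_quantize_fallback(enum ggml_type type, void * data,
                                         int64_t n, float s) {
    const ggml_type_traits_t traits = ggml_internal_get_type_traits(type);

    std::vector<float> tmp(n);
    traits.to_float(data, tmp.data(), n);     // qtype -> float

    for (int64_t i = 0; i < n; ++i) {
        tmp[i] *= s;                          // the actual op (here: a scale)
    }

    traits.from_float(tmp.data(), data, n);   // float -> same qtype
}
```

The from_float step at the end is exactly the re-quantization that this PR intentionally leaves out.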

ngxson merged commit 41cd47c into ggml-org:master on Jul 25, 2024
53 checks passed
ggerganov (Member) commented

@ngxson We should add some lightweight tests of the LoRA functionality.

fairydreaming (Collaborator) commented

I think there may be a problem related to this PR: #8974
