
merge lora.adapters into a base model? confused about the bin file part. #8663

Answered by ngxson
markat1 asked this question in Q&A

Sorry, the guide has a typo. The LoRA adapter must always be a GGUF file:

./bin/llama-export-lora \
    -m open-llama-3b-v2-q8_0.gguf \
    -o open-llama-3b-v2-q8_0-english2tokipona-chat.gguf \
    --lora lora-open-llama-3b-v2-q8_0-english2tokipona-chat-LATEST.gguf
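
If the adapter is still in Hugging Face PEFT format (adapter_model.bin or adapter_model.safetensors), it has to be converted to GGUF first. A minimal sketch, assuming the convert_lora_to_gguf.py script that ships with llama.cpp; the base-model directory and adapter directory below are placeholder names, and the exact flags may differ between versions, so check the script's --help:

python3 convert_lora_to_gguf.py \
    --base ./open-llama-3b-v2 \
    --outfile lora-open-llama-3b-v2-q8_0-english2tokipona-chat-LATEST.gguf \
    ./my-peft-lora-adapter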

Multiple LoRA adapters can be applied by passing multiple --lora FNAME or --lora-scaled FNAME S command-line parameters:

./bin/llama-export-lora \
    -m your_base_model.gguf \
    -o your_merged_model.gguf \
    --lora-scaled lora_task_A.gguf 0.5 \
    --lora-scaled lora_task_B.gguf 0.5
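
Merging is not strictly required: the same adapters can also be applied at load time instead of exporting a merged model. A minimal sketch, assuming the llama-cli binary from the same build and the example file names above (the prompt and token count are placeholders):

./bin/llama-cli \
    -m your_base_model.gguf \
    --lora-scaled lora_task_A.gguf 0.5 \
    --lora-scaled lora_task_B.gguf 0.5 \
    -p "Hello" -n 64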

It's fixed in #8669
