1 parent f578b86 commit a55eb1b
README.md
@@ -622,9 +622,6 @@ python3 -m pip install -r requirements.txt
# convert the model to ggml FP16 format
python3 convert-hf-to-gguf.py models/mymodel/

-# [Optional] for models using BPE tokenizers
-python convert-hf-to-gguf.py models/mymodel/ --vocab-type bpe
-
# quantize the model to 4-bits (using Q4_K_M method)
./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
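For reference, a minimal sketch of the sequence this README section describes once the hunk is applied, using only the commands shown in the diff and its hunk context; it assumes a supported Hugging Face model has already been downloaded into models/mymodel/:

```sh
# install the Python dependencies for the conversion script
python3 -m pip install -r requirements.txt

# convert the model to ggml FP16 format
python3 convert-hf-to-gguf.py models/mymodel/

# quantize the model to 4-bits (using Q4_K_M method)
./llama-quantize ./models/mymodel/ggml-model-f16.gguf ./models/mymodel/ggml-model-Q4_K_M.gguf Q4_K_M
```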