Replies: 3 comments 3 replies
-
This is a bit fantastic! Did you run make?
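For reference, a minimal sketch of the make-based build (assuming a Unix-like shell and that the repo was cloned into a folder named llama.cpp; with this build the quantize binary ends up next to the Makefile in the repo root, which is what the guide's ./quantize refers to):
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make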
-
Try this:
Then have a look at README.md, which contains several examples of how to build with cmake for different purposes.
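For example, the usual cmake sequence looks roughly like this (a sketch only; check README.md for the exact options your platform and backend need):
cmake -B build
cmake --build build --config Release
On Windows this places the executables under build\bin\Release, e.g. build\bin\Release\quantize.exe.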
-
You say there is no error, but your log file says:
Looking at the steps you're following, it looks like you built from source. In that case the executables should be in the build directory. Based on your log I can tell that you're in the directory "llama.cpp", not "llama.cpp\build\bin\Release". If you run the quantize command from the folder containing quantize.exe, it should work.
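For example (the paths below are illustrative and assume the models folder sits in the repository root, as in the guide; adjust them to your setup):
cd llama.cpp\build\bin\Release
quantize.exe ..\..\..\models\7B\ggml-model-f16.bin ..\..\..\models\7B\ggml-model-q4_0.bin q4_0
Alternatively, stay in the llama.cpp root and call the executable by its relative path, e.g. .\build\bin\Release\quantize.exe with the same ./models/7B/... arguments.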
-
I am trying to follow the guide and I can usually get all the way to
quantize the model to 4-bits (using q4_0 method)
./quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0
without error. But there is no ./quantize! I don't see such a folder anywhere, and no file is found for that command!
Steps.txt
log.txt