1 parent 37e257c · commit 9dda13e
examples/server/README.md
@@ -16,6 +16,10 @@ This example allow you to have a llama.cpp http server to interact from a web pa
To get started right away, run the following command, making sure to use the correct path for the model you have:
#### Unix-based systems (Linux, macOS, etc.):
+Make sure to build with the server option enabled:
+```bash
+LLAMA_BUILD_SERVER=1 make
+```
```bash
./server -m models/7B/ggml-model.bin --ctx_size 2048
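The added lines above document the Makefile build. As a rough sketch only: if the project also exposes the server example through CMake (a `LLAMA_BUILD_SERVER` CMake option is assumed here, not taken from this commit), an equivalent build could look like this:

```bash
# Assumed CMake-based alternative to `LLAMA_BUILD_SERVER=1 make`;
# the -DLLAMA_BUILD_SERVER=ON option is an assumption, verify against CMakeLists.txt.
mkdir -p build
cd build
cmake .. -DLLAMA_BUILD_SERVER=ON
cmake --build . --config Release
```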
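Once `./server` is running, a plain HTTP request is a quick way to check it. This is an illustrative sketch only: it assumes the server listens on its default address and port (127.0.0.1:8080 here) and exposes the `/completion` JSON endpoint described elsewhere in this README.

```bash
# Assumes the default listen address and the /completion endpoint;
# adjust the URL and JSON fields to match the options your server build accepts.
curl --request POST \
    --url http://127.0.0.1:8080/completion \
    --header "Content-Type: application/json" \
    --data '{"prompt": "Building a website can be done in 10 simple steps:", "n_predict": 128}'
```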