# Build llama.cpp locally

The following sections describe how to build with different backends and options.
## CPU-only Build
- Using `CMake`:
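  The standard two-step configure and build (the Vulkan section later in this document uses the same pattern):

  ```sh
  cmake -B build
  cmake --build build --config Release
  ```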
Note: Building for arm64 can also be done with just MSVC (using the `build-arm64-windows-MSVC` preset or the standard CMake build instructions). However, MSVC does not support inline ARM assembly code, which is used e.g. for the accelerated Q4_0_4_8 CPU kernels.
## Metal Build

On macOS, Metal is enabled by default. Using Metal makes the computation run on the GPU.
To disable the Metal build at compile time use the `-DGGML_METAL=OFF` cmake option.
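For example, a fresh configure with Metal disabled (the `build` directory name is just a convention):

```sh
cmake -B build -DGGML_METAL=OFF
cmake --build build --config Release
```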
When built with Metal support, you can explicitly disable GPU inference with the `--n-gpu-layers 0` (`-ngl 0`) command-line argument.
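A sketch of such an invocation; the binary location and model path are illustrative:

```sh
./build/bin/llama-cli -m model.gguf -p "Hello" -ngl 0
```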
The environment variable `GGML_CUDA_ENABLE_UNIFIED_MEMORY=1` can be used to enable unified memory in Linux.
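For example, set it per invocation (binary and model paths are illustrative):

```sh
GGML_CUDA_ENABLE_UNIFIED_MEMORY=1 ./build/bin/llama-cli -m model.gguf -ngl 99
```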
Most of the compilation options available for CUDA should also be available for MUSA, though they haven't been thoroughly tested yet.
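As a sketch, configuring the MUSA backend follows the same CMake pattern, assuming the `GGML_MUSA` option:

```sh
cmake -B build -DGGML_MUSA=ON
cmake --build build --config Release
```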
### HIP
This provides GPU acceleration on HIP-supported AMD GPUs.
Make sure to have ROCm installed.
You can download it from your Linux distro's package manager or from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html#rocm-install-quick).
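A typical configure and build, assuming the `GGML_HIP` CMake option and an example target architecture (`gfx1030`); adjust `AMDGPU_TARGETS` for your GPU:

```sh
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
    cmake -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1030 -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release
```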
#### w64devkit
Switch into the `llama.cpp` directory and build using CMake.
```sh
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release
```
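To sanity-check the result, offload layers and look for a Vulkan device in the startup log (binary and model paths are illustrative):

```sh
./build/bin/llama-cli -m model.gguf -ngl 99
```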
#### Git Bash MINGW64
Download and install [`Git-SCM`](https://git-scm.com/downloads/win) with the default settings.