Commit 38797c7

docs : remove make references [no ci]
1 parent fb6e8b1 commit 38797c7

File tree

1 file changed (+0, -63 lines)

docs/build.md

Lines changed: 0 additions & 63 deletions
@@ -9,30 +9,6 @@ cd llama.cpp
 
 In order to build llama.cpp you have four different options.
 
-- Using `make`:
-  - On Linux or MacOS:
-
-      ```bash
-      make
-      ```
-
-  - On Windows (x86/x64 only, arm64 requires cmake):
-
-    1. Download the latest fortran version of [w64devkit](https://github.com/skeeto/w64devkit/releases).
-    2. Extract `w64devkit` on your pc.
-    3. Run `w64devkit.exe`.
-    4. Use the `cd` command to reach the `llama.cpp` folder.
-    5. From here you can run:
-        ```bash
-        make
-        ```
-
-  - Notes:
-    - For `Q4_0_4_4` quantization type build, add the `GGML_NO_LLAMAFILE=1` flag. For example, use `make GGML_NO_LLAMAFILE=1`.
-    - For faster compilation, add the `-j` argument to run multiple jobs in parallel. For example, `make -j 8` will run 8 jobs in parallel.
-    - For faster repeated compilation, install [ccache](https://ccache.dev/).
-    - For debug builds, run `make LLAMA_DEBUG=1`
-
 - Using `CMake`:
 
   ```bash
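
With the `make` path gone, the plain CPU build in this file goes through CMake only. A minimal sketch of the replacement flow, assuming the standard two-step CMake invocation that the retained section uses:

```bash
# Sketch: CMake flow replacing the removed plain `make` build.
# -j 8 mirrors the removed `make -j 8` tip; a Debug build type stands in
# for the removed `make LLAMA_DEBUG=1` (assumed mapping).
cmake -B build
cmake --build build --config Release -j 8
```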
@@ -104,27 +80,6 @@ This is only available on Mac PCs and it's enabled by default. You can just buil
 
 This provides BLAS acceleration using only the CPU. Make sure to have OpenBLAS installed on your machine.
 
-- Using `make`:
-  - On Linux:
-      ```bash
-      make GGML_OPENBLAS=1
-      ```
-
-  - On Windows:
-
-    1. Download the latest fortran version of [w64devkit](https://github.com/skeeto/w64devkit/releases).
-    2. Download the latest version of [OpenBLAS for Windows](https://github.com/xianyi/OpenBLAS/releases).
-    3. Extract `w64devkit` on your pc.
-    4. From the OpenBLAS zip that you just downloaded copy `libopenblas.a`, located inside the `lib` folder, inside `w64devkit\x86_64-w64-mingw32\lib`.
-    5. From the same OpenBLAS zip copy the content of the `include` folder inside `w64devkit\x86_64-w64-mingw32\include`.
-    6. Run `w64devkit.exe`.
-    7. Use the `cd` command to reach the `llama.cpp` folder.
-    8. From here you can run:
-
-        ```bash
-        make GGML_OPENBLAS=1
-        ```
-
 - Using `CMake` on Linux:
 
   ```bash
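
The removed `make GGML_OPENBLAS=1` presumably maps onto the CMake BLAS options; a sketch, assuming the `GGML_BLAS` and `GGML_BLAS_VENDOR` flags from the retained CMake section:

```bash
# Assumed CMake equivalent of the removed `make GGML_OPENBLAS=1`
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build --config Release
```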
@@ -167,10 +122,6 @@ This provides GPU acceleration using the CUDA cores of your Nvidia GPU. Make sur
 
 For Jetson user, if you have Jetson Orin, you can try this: [Offical Support](https://www.jetson-ai-lab.com/tutorial_text-generation.html). If you are using an old model(nano/TX2), need some additional operations before compiling.
 
-- Using `make`:
-  ```bash
-  make GGML_CUDA=1
-  ```
 - Using `CMake`:
 
   ```bash
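
For CUDA, the removed one-liner has a direct CMake counterpart; a sketch assuming the `GGML_CUDA` flag shown in the retained section:

```bash
# Assumed CMake equivalent of the removed `make GGML_CUDA=1`
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release
```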
@@ -196,10 +147,6 @@ The following compilation options are also available to tweak performance:
 
 This provides GPU acceleration using the MUSA cores of your Moore Threads MTT GPU. Make sure to have the MUSA SDK installed. You can download it from here: [MUSA SDK](https://developer.mthreads.com/sdk/download/musa).
 
-- Using `make`:
-  ```bash
-  make GGML_MUSA=1
-  ```
 - Using `CMake`:
 
   ```bash
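
Likewise for MUSA; a sketch assuming the `GGML_MUSA` CMake flag:

```bash
# Assumed CMake equivalent of the removed `make GGML_MUSA=1`
cmake -B build -DGGML_MUSA=ON
cmake --build build --config Release
```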
@@ -219,10 +166,6 @@ This provides BLAS acceleration on HIP-supported AMD GPUs.
 Make sure to have ROCm installed.
 You can download it from your Linux distro's package manager or from here: [ROCm Quick Start (Linux)](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/tutorial/quick-start.html#rocm-install-quick).
 
-- Using `make`:
-  ```bash
-  make GGML_HIP=1
-  ```
 - Using `CMake` for Linux (assuming a gfx1030-compatible AMD GPU):
   ```bash
   HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
@@ -247,11 +190,6 @@ You can download it from your Linux distro's package manager or from here: [ROCm
     && cmake --build build -- -j 16
   ```
 
-- Using `make` (example for target gfx1030, build with 16 CPU threads):
-  ```bash
-  make -j16 GGML_HIP=1 GGML_HIP_UMA=1 AMDGPU_TARGETS=gfx1030
-  ```
-
 - Using `CMake` for Windows (using x64 Native Tools Command Prompt for VS, and assuming a gfx1100-compatible AMD GPU):
   ```bash
   set PATH=%HIP_PATH%\bin;%PATH%
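
Both removed HIP `make` builds (the plain one and the gfx1030/UMA example) fold into the Linux CMake command visible in the context above; a sketch, assuming `GGML_HIP_UMA` is also exposed as a CMake option:

```bash
# Assumed CMake equivalent of the removed
# `make -j16 GGML_HIP=1 GGML_HIP_UMA=1 AMDGPU_TARGETS=gfx1030`
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
    cmake -S . -B build -DGGML_HIP=ON -DGGML_HIP_UMA=ON -DAMDGPU_TARGETS=gfx1030 \
    && cmake --build build -- -j 16
```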
@@ -289,7 +227,6 @@ Libs: -lvulkan-1
 EOF
 
 ```
-Switch into the `llama.cpp` directory and run `make GGML_VULKAN=1`.
 
 #### Git Bash MINGW64
 
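
The removed w64devkit Vulkan step has a CMake counterpart as well; a sketch assuming the `GGML_VULKAN` flag used elsewhere in this file:

```bash
# Assumed CMake equivalent of the removed `make GGML_VULKAN=1`
cmake -B build -DGGML_VULKAN=1
cmake --build build --config Release
```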
