Commit 2931bb2

Update doc

Signed-off-by: Xiaodong Ye <[email protected]>

1 parent 4eb8735 commit 2931bb2

File tree: 2 files changed, +14 −0 lines changed

README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -405,6 +405,7 @@ Please refer to [Build llama.cpp locally](./docs/build.md)
 | [BLAS](./docs/build.md#blas-build) | All |
 | [BLIS](./docs/backend/BLIS.md) | All |
 | [SYCL](./docs/backend/SYCL.md) | Intel and Nvidia GPU |
+| [MUSA](./docs/build.md#musa) | Moore Threads GPU |
 | [CUDA](./docs/build.md#cuda) | Nvidia GPU |
 | [hipBLAS](./docs/build.md#hipblas) | AMD GPU |
 | [Vulkan](./docs/build.md#vulkan) | GPU |
```

docs/build.md

Lines changed: 13 additions & 0 deletions

````diff
@@ -179,6 +179,19 @@ The environment variable [`CUDA_VISIBLE_DEVICES`](https://docs.nvidia.com/cuda/c
 | GGML_CUDA_PEER_MAX_BATCH_SIZE | Positive integer | 128 | Maximum batch size for which to enable peer access between multiple GPUs. Peer access requires either Linux or NVLink. When using NVLink enabling peer access for larger batch sizes is potentially beneficial. |
 | GGML_CUDA_FA_ALL_QUANTS | Boolean | false | Compile support for all KV cache quantization type (combinations) for the FlashAttention CUDA kernels. More fine-grained control over KV cache size but compilation takes much longer. |
 
+### MUSA
+
+- Using `make`:
+  ```bash
+  make GGML_MUSA=1
+  ```
+- Using `CMake`:
+
+  ```bash
+  cmake -B build -DGGML_MUSA=ON
+  cmake --build build --config Release
+  ```
+
 ### hipBLAS
 
 This provides BLAS acceleration on HIP-supported AMD GPUs.
````
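For a quick end-to-end check, the CMake invocation from the new MUSA section can be wrapped in a small shell sketch that only configures the build when a MUSA SDK appears to be present; the `/usr/local/musa` path is an assumption for illustration and is not part of the committed docs:

```shell
# Hedged sketch: run the CMake build added in docs/build.md, but only when a
# MUSA SDK looks installed. MUSA_SDK_DIR is an assumed install location;
# adjust it for your system.
MUSA_SDK_DIR=/usr/local/musa   # assumption, not from the original docs
BUILD_DIR=build
CMAKE_FLAGS="-DGGML_MUSA=ON"

if [ -d "$MUSA_SDK_DIR" ]; then
    # Configure and build exactly as the docs describe.
    cmake -B "$BUILD_DIR" $CMAKE_FLAGS
    cmake --build "$BUILD_DIR" --config Release
else
    # No SDK found: show the command that would have been run.
    echo "MUSA SDK not found at $MUSA_SDK_DIR; would run: cmake -B $BUILD_DIR $CMAKE_FLAGS"
fi
```

The `make GGML_MUSA=1` path from the diff is equivalent for Makefile-based builds; both simply toggle the `GGML_MUSA` option on.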
