
Commit a07c2c8

docs : Update readme to build targets for local docker build (#11368)

1 parent: 8137b4b

File tree: 3 files changed (+8 lines, -8 lines)
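All of the commands in this diff select a build stage from a consolidated Dockerfile via `--target`. As a quick sanity check (a sketch assuming the standard multi-stage `FROM ... AS <name>` syntax, not something this commit adds), you can list the stages a Dockerfile defines:

```sh
# List the build stages declared in the consolidated CUDA Dockerfile,
# e.g. to confirm that targets such as `full`, `light`, and `server` exist.
grep -iE '^FROM .+ AS ' .devops/cuda.Dockerfile
```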

docs/backend/SYCL.md (1 addition, 1 deletion)

@@ -133,7 +133,7 @@ The docker build option is currently limited to *intel GPU* targets.
 ### Build image
 ```sh
 # Using FP16
-docker build -t llama-cpp-sycl --build-arg="GGML_SYCL_F16=ON" -f .devops/llama-cli-intel.Dockerfile .
+docker build -t llama-cpp-sycl --build-arg="GGML_SYCL_F16=ON" --target light -f .devops/intel.Dockerfile .
 ```
 
 *Notes*:
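A minimal smoke test of the image built above might look like the following; the model path and device mapping are assumptions about the host, not part of this diff:

```sh
# Hypothetical invocation: mount the working directory (expected to hold a
# GGUF model) and pass the Intel GPU render devices through to the container.
docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri \
    llama-cpp-sycl -m "/app/models/YOUR_MODEL_FILE" -p "Hello" -n 64
```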

docs/build.md (1 addition, 1 deletion)

@@ -286,7 +286,7 @@ You don't need to install Vulkan SDK. It will be installed inside the container.
 
 ```sh
 # Build the image
-docker build -t llama-cpp-vulkan -f .devops/llama-cli-vulkan.Dockerfile .
+docker build -t llama-cpp-vulkan --target light -f .devops/vulkan.Dockerfile .
 
 # Then, use it:
 docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-vulkan -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
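Only the `light` (CLI) target appears in this hunk. Assuming `.devops/vulkan.Dockerfile` declares the same stage names as the CUDA and MUSA files below (an assumption, not shown in this commit), a server image would be built analogously:

```sh
# Hypothetical: build the HTTP-server stage of the Vulkan image, mirroring
# the full/light/server targets used for the CUDA and MUSA images below.
docker build -t llama-cpp-vulkan-server --target server -f .devops/vulkan.Dockerfile .
```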

docs/docker.md (6 additions, 6 deletions)

@@ -60,9 +60,9 @@ Assuming one has the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia
 ## Building Docker locally
 
 ```bash
-docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
-docker build -t local/llama.cpp:light-cuda -f .devops/llama-cli-cuda.Dockerfile .
-docker build -t local/llama.cpp:server-cuda -f .devops/llama-server-cuda.Dockerfile .
+docker build -t local/llama.cpp:full-cuda --target full -f .devops/cuda.Dockerfile .
+docker build -t local/llama.cpp:light-cuda --target light -f .devops/cuda.Dockerfile .
+docker build -t local/llama.cpp:server-cuda --target server -f .devops/cuda.Dockerfile .
 ```
 
 You may want to pass in some different `ARGS`, depending on the CUDA environment supported by your container host, as well as the GPU architecture.
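For example (a sketch; the ARG names and values below are assumptions to verify against `.devops/cuda.Dockerfile`, not part of this diff):

```sh
# Hypothetical build-arg overrides: pin the CUDA base-image version and
# restrict compilation to one GPU architecture to speed up the build.
docker build -t local/llama.cpp:light-cuda \
    --target light \
    --build-arg CUDA_VERSION=12.6.0 \
    --build-arg CUDA_DOCKER_ARCH=86 \
    -f .devops/cuda.Dockerfile .
```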
@@ -95,9 +95,9 @@ Assuming one has the [mt-container-toolkit](https://developer.mthreads.com/musa/
 ## Building Docker locally
 
 ```bash
-docker build -t local/llama.cpp:full-musa -f .devops/full-musa.Dockerfile .
-docker build -t local/llama.cpp:light-musa -f .devops/llama-cli-musa.Dockerfile .
-docker build -t local/llama.cpp:server-musa -f .devops/llama-server-musa.Dockerfile .
+docker build -t local/llama.cpp:full-musa --target full -f .devops/musa.Dockerfile .
+docker build -t local/llama.cpp:light-musa --target light -f .devops/musa.Dockerfile .
+docker build -t local/llama.cpp:server-musa --target server -f .devops/musa.Dockerfile .
 ```
 
 You may want to pass in some different `ARGS`, depending on the MUSA environment supported by your container host, as well as the GPU architecture.
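As with CUDA, build args can be overridden; the ARG name and value below are assumptions to check against `.devops/musa.Dockerfile`:

```sh
# Hypothetical build-arg override: pin the MUSA SDK version used by the
# base image before selecting the `light` target.
docker build -t local/llama.cpp:light-musa \
    --target light \
    --build-arg MUSA_VERSION=rc3.1.0 \
    -f .devops/musa.Dockerfile .
```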
