Commit 051633e

Author: Olivier Chafik
Commit message: update dockerfile refs
1 parent 1cc6514, commit 051633e

File tree: 2 files changed, +3 −3 lines

README-sycl.md — 1 addition, 1 deletion

````diff
@@ -99,7 +99,7 @@ The docker build option is currently limited to *intel GPU* targets.
 ### Build image
 ```sh
 # Using FP16
-docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/llama-intel.Dockerfile .
+docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/llama-cli-intel.Dockerfile .
 ```
 
 *Notes*:
````
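The commit only repoints the `-f` argument; the image tag and build flags are unchanged. As a minimal sketch, the renamed SYCL build command can be wrapped in a helper that toggles FP16 — the `sycl_build_cmd` name is purely illustrative, not part of the repository:

```shell
# Hypothetical helper that prints the SYCL docker build command from this
# commit; the FP16 toggle is passed through as the --build-arg value.
sycl_build_cmd() {
  printf 'docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=%s" -f .devops/llama-cli-intel.Dockerfile .\n' "$1"
}

sycl_build_cmd ON
```

Running the printed command requires a docker daemon and the llama.cpp source tree; the sketch itself only assembles the command line.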

README.md — 2 additions, 2 deletions

````diff
@@ -555,7 +555,7 @@ Building the program with BLAS support may lead to some performance improvements
 
 ```sh
 # Build the image
-docker build -t llama-cpp-vulkan -f .devops/llama-vulkan.Dockerfile .
+docker build -t llama-cpp-vulkan -f .devops/llama-cli-vulkan.Dockerfile .
 
 # Then, use it:
 docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-vulkan -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
````
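For backends where the image tag and Dockerfile share the backend name (vulkan is such a case; the intel/SYCL file is the exception), the renamed "light" build commands follow a common `llama-cli-<backend>.Dockerfile` pattern. A sketch with a hypothetical helper:

```shell
# Hypothetical helper: print the single-binary ("light") build command for a
# backend whose image tag and Dockerfile both use the backend name.
light_build_cmd() {
  backend="$1"
  printf 'docker build -t llama-cpp-%s -f .devops/llama-cli-%s.Dockerfile .\n' "$backend" "$backend"
}

light_build_cmd vulkan
```

This reproduces the vulkan build line from the hunk above exactly.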
````diff
@@ -907,7 +907,7 @@ Assuming one has the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia
 
 ```bash
 docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
-docker build -t local/llama.cpp:light-cuda -f .devops/llama-cuda.Dockerfile .
+docker build -t local/llama.cpp:light-cuda -f .devops/llama-cli-cuda.Dockerfile .
 docker build -t local/llama.cpp:server-cuda -f .devops/llama-server-cuda.Dockerfile .
 ```
````