2 files changed: +3 −3 lines changed

@@ -99,7 +99,7 @@ The docker build option is currently limited to *intel GPU* targets.
 ### Build image
 ``` sh
 # Using FP16
-docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/llama-intel.Dockerfile .
+docker build -t llama-cpp-sycl --build-arg="LLAMA_SYCL_F16=ON" -f .devops/llama-cli-intel.Dockerfile .
 ```

 *Notes*:
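The change above (and the two hunks that follow) renames the single-binary Dockerfiles from `.devops/llama-<suffix>.Dockerfile` to `.devops/llama-cli-<suffix>.Dockerfile`. A minimal sketch of the updated build commands under that assumed naming pattern; the image tags and suffixes are taken from the commands in this diff, and `build_cmd` is a hypothetical helper, not part of the repo:

```shell
# Sketch (assumption): after this rename, single-binary images are built from
# .devops/llama-cli-<suffix>.Dockerfile. build_cmd only composes the command
# string for a given image tag and Dockerfile suffix; it does not invoke docker.
build_cmd() {
  printf 'docker build -t %s -f .devops/llama-cli-%s.Dockerfile .\n' "$1" "$2"
}

build_cmd llama-cpp-sycl   intel
build_cmd llama-cpp-vulkan vulkan
```

Note that the SYCL image keeps its `llama-cpp-sycl` tag while its Dockerfile uses the `intel` suffix, which is why the helper takes tag and suffix separately.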
@@ -555,7 +555,7 @@ Building the program with BLAS support may lead to some performance improvements
 
 ```sh
 # Build the image
-docker build -t llama-cpp-vulkan -f .devops/llama-vulkan.Dockerfile .
+docker build -t llama-cpp-vulkan -f .devops/llama-cli-vulkan.Dockerfile .
 
 # Then, use it:
 docker run -it --rm -v "$(pwd):/app:Z" --device /dev/dri/renderD128:/dev/dri/renderD128 --device /dev/dri/card1:/dev/dri/card1 llama-cpp-vulkan -m "/app/models/YOUR_MODEL_FILE" -p "Building a website can be done in 10 simple steps:" -n 400 -e -ngl 33
@@ -907,7 +907,7 @@ Assuming one has the [nvidia-container-toolkit](https://github.com/NVIDIA/nvidia
 
 ``` bash
 docker build -t local/llama.cpp:full-cuda -f .devops/full-cuda.Dockerfile .
-docker build -t local/llama.cpp:light-cuda -f .devops/llama-cuda.Dockerfile .
+docker build -t local/llama.cpp:light-cuda -f .devops/llama-cli-cuda.Dockerfile .
 docker build -t local/llama.cpp:server-cuda -f .devops/llama-server-cuda.Dockerfile .
 ```
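The CUDA hunk only changes how the `light-cuda` image is built, not how it is run. A dry-run sketch of invoking the rebuilt image; the model path, prompt, and flag values here are illustrative placeholders, not values from this change, and by default the script only prints the command since really running it needs a CUDA-capable host with the NVIDIA container toolkit:

```shell
# Sketch (assumption): run the rebuilt light-cuda image against a local model
# directory. DRY_RUN=1 (the default) prints the docker command instead of
# executing it, so this is safe on hosts without a GPU or docker daemon.
DRY_RUN=${DRY_RUN:-1}
cmd="docker run --gpus all -v /path/to/models:/models local/llama.cpp:light-cuda -m /models/7B/ggml-model-q4_0.gguf -p 'Building a website can be done in 10 simple steps:' -n 64 --n-gpu-layers 99"

if [ "$DRY_RUN" = "1" ]; then
  echo "$cmd"
else
  eval "$cmd"
fi
```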