
Commit 7705291 (parent: c301e4f)

dsx1986 authored and ggerganov committed:

readme : fix typo [no ci]

File tree: 1 file changed, +1 −1 lines

examples/main/README.md (1 addition, 1 deletion)

@@ -69,7 +69,7 @@ In this section, we cover the most commonly used options for running the `llama-
 - `-c N, --ctx-size N`: Set the size of the prompt context. The default is 512, but LLaMA models were built with a context of 2048, which will provide better results for longer input/inference.
 - `-mli, --multiline-input`: Allows you to write or paste multiple lines without ending each in '\'
 - `-t N, --threads N`: Set the number of threads to use during generation. For optimal performance, it is recommended to set this value to the number of physical CPU cores your system has.
-- - `-ngl N, --n-gpu-layers N`: When compiled with GPU support, this option allows offloading some layers to the GPU for computation. Generally results in increased performance.
+- `-ngl N, --n-gpu-layers N`: When compiled with GPU support, this option allows offloading some layers to the GPU for computation. Generally results in increased performance.
 
 ## Input Prompts
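The options documented in the diff can be combined on one command line. A minimal sketch of such an invocation, assuming a local GGUF model file (the model path and the flag values below are illustrative placeholders, not part of the commit):

```shell
# Sketch only: model path and values are hypothetical examples.
# -c 2048   : context size matching LLaMA's training context
# -t 8      : threads; set to your number of physical CPU cores
# -ngl 32   : layers offloaded to the GPU (GPU-enabled builds only)
# -mli      : accept multi-line input without a trailing '\'
./llama-cli -m models/7B/model.gguf -c 2048 -t 8 -ngl 32 -mli \
  -p "Hello"
```

Note that `-ngl` is silently ignored by CPU-only builds, which is why the README qualifies it with "when compiled with GPU support".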

0 comments