Commit 4399f13

server : remove obsolete --memory-f32 option
1 parent 1a43c72

File tree

2 files changed: 0 additions, 3 deletions

examples/server/README.md (0 additions, 1 deletion)

```diff
@@ -30,7 +30,6 @@ The project is under active development, and we are [looking for feedback and co
 - `-ts SPLIT, --tensor-split SPLIT`: When using multiple GPUs, this option controls how large tensors should be split across all GPUs. `SPLIT` is a comma-separated list of non-negative values that assigns the proportion of data that each GPU should get in order. For example, "3,2" will assign 60% of the data to GPU 0 and 40% to GPU 1. By default, the data is split in proportion to VRAM, but this may not be optimal for performance.
 - `-b N`, `--batch-size N`: Set the batch size for prompt processing. Default: `2048`
 - `-ub N`, `--ubatch-size N`: Physical maximum batch size. Default: `512`
-- `--memory-f32`: Use 32-bit floats instead of 16-bit floats for memory key+value. Not recommended.
 - `--mlock`: Lock the model in memory, preventing it from being swapped out when memory-mapped.
 - `--no-mmap`: Do not memory-map the model. By default, models are mapped into memory, which allows the system to load only the necessary parts of the model as needed.
 - `--numa STRATEGY`: Attempt one of the below optimization strategies that may help on some NUMA systems
```

examples/server/server.cpp (0 additions, 2 deletions)

```diff
@@ -2189,8 +2189,6 @@ static void server_print_usage(const char * argv0, const gpt_params & params, co
     printf("                        KV cache defragmentation threshold (default: %.1f, < 0 - disabled)\n", params.defrag_thold);
     printf("  -b N, --batch-size N  logical maximum batch size (default: %d)\n", params.n_batch);
     printf("  -ub N, --ubatch-size N physical maximum batch size (default: %d)\n", params.n_ubatch);
-    printf("  --memory-f32          use f32 instead of f16 for memory key+value (default: disabled)\n");
-    printf("                        not recommended: doubles context memory required and no measurable increase in quality\n");
     if (llama_supports_mlock()) {
         printf("  --mlock               force system to keep model in RAM rather than swapping or compressing\n");
     }
```
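The removed usage text stated that f32 key+value storage "doubles context memory required" compared to f16. A minimal sketch of that arithmetic, assuming a generic transformer KV cache with two tensors (K and V) per layer; the model shape below is an illustrative assumption, not taken from llama.cpp:

```python
# Approximate KV-cache size: 2 tensors (K and V) per layer,
# each of shape [n_ctx, n_kv_heads * head_dim].
# The shape values used below are hypothetical, for illustration only.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, n_ctx, bytes_per_elem):
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elem

# f16 uses 2 bytes per element, f32 uses 4.
f16 = kv_cache_bytes(32, 32, 128, 4096, 2)
f32 = kv_cache_bytes(32, 32, 128, 4096, 4)

assert f32 == 2 * f16  # f32 KV exactly doubles the cache size
print(f"f16: {f16 / 2**20:.0f} MiB, f32: {f32 / 2**20:.0f} MiB")
# prints "f16: 2048 MiB, f32: 4096 MiB"
```

Since only the element width changes, the ratio is exactly 2x regardless of model shape, which is why the option was dropped rather than kept as a tuning knob.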
