Commit a8a6802

ericcurtin authored and mglambda committed
llama-run : fix context size (ggml-org#11094)
Set `n_ctx` equal to `n_batch` in the `Opt` class. The context size is now a more reasonable 2048.

Signed-off-by: Eric Curtin <[email protected]>
1 parent 4ef3cda commit a8a6802

File tree

1 file changed (+1, −0 lines)


examples/run/run.cpp

Lines changed: 1 addition & 0 deletions
@@ -83,6 +83,7 @@ class Opt {
     }
 
     ctx_params.n_batch = context_size >= 0 ? context_size : context_size_default;
+    ctx_params.n_ctx = ctx_params.n_batch;
     model_params.n_gpu_layers = ngl >= 0 ? ngl : ngl_default;
     temperature = temperature >= 0 ? temperature : temperature_default;
 

0 commit comments