
Commit 6f1ee4b

Fix crash for 65B model with pre-allocated memory (#485)
1 parent 8520fc3 commit 6f1ee4b

File tree

1 file changed (+1, -1)

llama.cpp

Lines changed: 1 addition & 1 deletion
@@ -239,7 +239,7 @@ static bool kv_cache_init(
     const int n_mem = n_layer*n_ctx;
     const int n_elements = n_embd*n_mem;
 
-    cache.buf.resize(2*n_elements*ggml_type_size(wtype) + 2u*MB);
+    cache.buf.resize(2u*n_elements*ggml_type_size(wtype) + 2u*MB);
 
     struct ggml_init_params params;
     params.mem_size = cache.buf.size();
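For context, a minimal sketch of the size arithmetic this one-character change affects. The numbers below are assumptions for illustration, not taken from the commit: 65B dimensions of n_embd = 8192 and n_layer = 80, a context of n_ctx = 2048, and an F16 KV cache (2 bytes per element). With the signed literal 2, the subexpression 2*n_elements is evaluated in int and can exceed INT_MAX, which is undefined behaviour and in practice wraps to a negative value, so the size handed to resize() is garbage and the pre-allocation crashes; the unsigned literal 2u keeps the whole product in unsigned arithmetic.

// Sketch of the kv_cache_init() buffer-size computation.
// Assumed values (not from the commit): n_embd = 8192, n_layer = 80,
// n_ctx = 2048, element size 2 bytes (F16).
#include <cstdio>
#include <cstddef>

int main() {
    const int n_ctx   = 2048;
    const int n_embd  = 8192;
    const int n_layer = 80;

    const int n_mem      = n_layer*n_ctx;   // 163840
    const int n_elements = n_embd*n_mem;    // 1342177280, still below INT_MAX

    const size_t type_size = 2;             // stand-in for ggml_type_size(wtype)
    const size_t MB        = 1024*1024;

    // Before the fix, the expression began with the signed literal 2, so
    // 2*n_elements was evaluated in int and overflowed INT_MAX
    // (2 * 1342177280 = 2684354560): undefined behaviour that in practice
    // wraps negative before the conversion to size_t.
    // Leading with 2u keeps the multiplication in unsigned arithmetic.
    const size_t buf_size = 2u*n_elements*type_size + 2u*MB;

    printf("kv cache buffer: %zu bytes (~%.1f GiB)\n",
           buf_size, buf_size/(1024.0*1024.0*1024.0));  // 5370806272 bytes, ~5.0 GiB
    return 0;
}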

0 commit comments