
Commit 01eeed8

Update README.md
1 parent 6da2df3 commit 01eeed8


README.md

Lines changed: 2 additions & 0 deletions
@@ -5,6 +5,8 @@ Inference of [Facebook's LLaMA](https://github.com/facebookresearch/llama) model
 **TEMPORARY NOTICE:**
 If you observe garbage results, make sure to update to latest master. There was a bug and it was fixed here: https://github.com/ggerganov/llama.cpp/commit/70bc0b8b15b98dca23b28f0c8f5e34b27e424cda
 
+Also, currently the quantized models run **only** on Apple Silicon. On other architectures, you can [use the F16 models](https://github.com/ggerganov/llama.cpp/issues/2#issuecomment-1464615286), but they will be much slower. Support will be [added later](https://github.com/ggerganov/ggml/pull/27)
+
 ## Description
 
 The main goal is to run the model using 4-bit quantization on a MacBook.
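
For context on the added notice: on hardware other than Apple Silicon, the workaround is to skip the 4-bit quantization step and run inference directly on the F16 model. A minimal sketch of what that might look like, assuming the conversion script, output file name, and `./main` flags from the README's usage section (these are assumptions from that section, not shown in this commit, and may differ in your checkout):

```bash
# Convert the 7B model to ggml FP16 format; this is assumed to write
# models/7B/ggml-model-f16.bin next to the original weights.
python3 convert-pth-to-ggml.py models/7B/ 1

# Skip the 4-bit quantization step and run inference on the F16 file directly.
# Expect noticeably slower generation than the quantized model on Apple Silicon.
./main -m ./models/7B/ggml-model-f16.bin -t 8 -n 128 -p "Building a website can be done in 10 simple steps:"
```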
