
Commit 6da2df3

Update README.md
1 parent 9dcf4db commit 6da2df3

File tree

1 file changed: +1, -1 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -139,5 +139,5 @@ python3 convert-pth-to-ggml.py models/7B/ 1
 - In general, it seems to work, but I think it fails for unicode character support. Hopefully, someone can help with that
 - I don't know yet how much the quantization affects the quality of the generated text
 - Probably the token sampling can be improved
-- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this on Apple Silicon
+- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this on Apple Silicon. For now, on Linux and Windows you can use the F16 `ggml-model-f16.bin` model, but it will be much slower.
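The new sentence points Linux and Windows users at the unquantized F16 model instead of the 4-bit file. A minimal sketch of that workflow is below; only the conversion command and the `ggml-model-f16.bin` filename come from this diff, while the `./main` invocation and its `-m`, `-t`, `-n`, and `-p` flags are assumed from the usage examples elsewhere in the README.

```sh
# Convert the original 7B weights to ggml F16 format
# (the trailing 1 selects F16 output, per the README's conversion step).
python3 convert-pth-to-ggml.py models/7B/ 1

# Skip the quantization step and run inference on the F16 model directly.
# Flags mirror the README's quantized example and are assumptions, not part of this commit.
./main -m ./models/7B/ggml-model-f16.bin -t 8 -n 128 -p "Example prompt"
```

Expect generation to be noticeably slower than the q4_0 path on Apple Silicon until x86 quantization support lands in the linked ggml PR.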

0 commit comments
