1 parent fe60904 · commit 56551bc
README.md
@@ -7,6 +7,14 @@
Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++

+## ⚠️ TEMPORARY NOTICE ABOUT UPCOMING BREAKING CHANGE ⚠️
+
+**The quantization formats will soon be updated: https://github.com/ggerganov/llama.cpp/pull/1305**
+
+**All `ggml` model files using the old format will not work with the latest `llama.cpp` code after that change is merged**
+
+---
+
**Hot topics:**

- [Roadmap May 2023](https://github.com/ggerganov/llama.cpp/discussions/1220)