1 parent 7824380 commit f76cb3a
README.md
```diff
@@ -9,7 +9,7 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 **Hot topics:**
 
-- [Add GPU support to ggml](https://github.com/ggerganov/llama.cpp/issues/914)
+- [Add GPU support to ggml](https://github.com/ggerganov/llama.cpp/discussions/915)
 - [Roadmap Apr 2023](https://github.com/ggerganov/llama.cpp/discussions/784)
 
 ## Description
```