
Commit f62faf3

amakropoulos authored and arthw committed

readme : add LLMUnity to UI projects (ggml-org#9381)

* add LLMUnity to UI projects
* add newline to examples/rpc/README.md to fix editorconfig-checker unit test
1 parent feb3786 commit f62faf3

File tree

2 files changed, +3 -1 lines changed


README.md

Lines changed: 1 addition & 0 deletions

@@ -197,6 +197,7 @@ Unless otherwise noted these projects are open-source with permissive licensing:
 - [AI Sublime Text plugin](https://github.com/yaroslavyaroslav/OpenAI-sublime-text) (MIT)
 - [AIKit](https://github.com/sozercan/aikit) (MIT)
 - [LARS - The LLM & Advanced Referencing Solution](https://github.com/abgulati/LARS) (AGPL)
+- [LLMUnity](https://github.com/undreamai/LLMUnity) (MIT)
 
 *(to have a project listed here, it should clearly state that it depends on `llama.cpp`)*
 

examples/rpc/README.md

Lines changed: 2 additions & 1 deletion

@@ -70,4 +70,5 @@ Finally, when running `llama-cli`, use the `--rpc` option to specify the host an
 $ bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf -p "Hello, my name is" --repeat-penalty 1.0 -n 64 --rpc 192.168.88.10:50052,192.168.88.11:50052 -ngl 99
 ```
 
-This way you can offload model layers to both local and remote devices.
+This way you can offload model layers to both local and remote devices.
+
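The rpc README documented above passes a comma-separated list of `host:port` endpoints to `--rpc`. As a minimal sketch of that workflow, the list can be assembled from a set of worker addresses before invoking `llama-cli`; the host addresses below are the example values from the README, not real endpoints.

```shell
# Sketch of the RPC workflow from the diff above; the worker addresses are
# the README's example values and are assumed, not real machines.
#
# On each worker machine, an RPC server would first be started, e.g.:
#   bin/rpc-server -p 50052
#
# Build the comma-separated host:port list that --rpc expects:
HOSTS="192.168.88.10 192.168.88.11"
RPC_ARG=$(for h in $HOSTS; do printf '%s:50052,' "$h"; done)
RPC_ARG=${RPC_ARG%,}   # strip the trailing comma

echo "$RPC_ARG"   # 192.168.88.10:50052,192.168.88.11:50052

# The main host would then offload layers to those servers:
# bin/llama-cli -m ../models/tinyllama-1b/ggml-model-f16.gguf \
#   -p "Hello, my name is" --rpc "$RPC_ARG" -ngl 99
```

Keeping the endpoint list in a variable makes it easy to add or remove workers without editing the `llama-cli` invocation itself.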

0 commit comments

Comments
 (0)