
Commit e07cdc9

cbilgin (Bilgin Cagatay) authored and committed
Just doc changes (#453)
Co-authored-by: Bilgin Cagatay <[email protected]>
1 parent c33a01b · commit e07cdc9

File tree

4 files changed: +11 additions, -11 deletions


README.md

Lines changed: 8 additions & 8 deletions
@@ -1,9 +1,9 @@
 # Chat with LLMs Everywhere
-Torchchat is a small codebase to showcase running large language models (LLMs) within Python OR within your own (C/C++) application on mobile (iOS/Android), desktop and servers.
+Torchchat is a compact codebase to showcase the capability of running large language models (LLMs) seamlessly across diverse platforms. With Torchchat, you can run LLMs from within Python, your own (C/C++) application on mobile (iOS/Android), desktop, or servers.
 
 ## Highlights
 - Command line interaction with popular LLMs such as Llama 3, Llama 2, Stories, Mistral and more
-- Supporting [some GGUF files](docs/GGUF.md) and the Hugging Face checkpoint format
+- Supports [common GGUF formats](docs/GGUF.md) and the Hugging Face checkpoint format
 - PyTorch-native execution with performance
 - Supports popular hardware and OS
   - Linux (x86)
@@ -59,16 +59,16 @@ with `python3 torchchat.py remove llama3`.
 * [Chat](#chat)
 * [Generate](#generate)
 * [Run via Browser](#browser)
-* [Quantizing your model (suggested for mobile)](#quantization)
+* [Quantize your models (suggested for mobile)](#quantization)
 * Export and run models in native environments (C++, your own app, mobile, etc.)
-  * [Exporting for desktop/servers via AOTInductor](#export-server)
-  * [Running exported .so file via your own C++ application](#run-server)
+  * [Export for desktop/servers via AOTInductor](#export-server)
+  * [Run exported .so file via your own C++ application](#run-server)
     * in Chat mode
     * in Generate mode
-  * [Exporting for mobile via ExecuTorch](#export-executorch)
+  * [Export for mobile via ExecuTorch](#export-executorch)
     * in Chat mode
     * in Generate mode
-  * [Running exported executorch file on iOS or Android](#run-mobile)
+  * [Run exported ExecuTorch file on iOS or Android](#run-mobile)
 
 ## Models
 These are the supported models
@@ -242,7 +242,7 @@ python3 torchchat.py export stories15M --output-pte-path stories15M.pte
 python3 torchchat.py generate --device cpu --pte-path stories15M.pte --prompt "Hello my name is"
 ```
 
-See below under Mobile Execution if you want to deploy and execute a model in your iOS or Android app.
+See below under [Mobile Execution](#run-mobile) if you want to deploy and execute a model in your iOS or Android app.
 
 
 ## Quantization

docs/Android.md

Lines changed: 1 addition & 1 deletion
@@ -3,7 +3,7 @@
 Check out the [tutorial on how to build an Android app running your
 PyTorch models with
 ExecuTorch](https://pytorch.org/executorch/main/llm/llama-demo-android.html),
-and give your torchat models a spin.
+and give your torchchat models a spin.
 
 ![Screenshot](https://pytorch.org/executorch/main/_static/img/android_llama_app.png "Android app running Llama model")

docs/executorch_setup.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# Set-up executorch
+# Set-up ExecuTorch
 
 Before running any commands in torchchat that require ExecuTorch, you must first install ExecuTorch.

docs/quantization.md

Lines changed: 1 addition & 1 deletion
@@ -284,4 +284,4 @@ We invite contributors to submit established quantization schemes, with accuracy
 - Describe how to choose a quantization scheme. Which factors should they take into account? Concrete recommendations for use cases, esp. mobile.
 - Quantization reference, describe options for --quantize parameter
 - Show a table with performance/accuracy metrics
-- Quantization support matrix? torchat Quantization Support Matrix
+- Quantization support matrix? torchchat Quantization Support Matrix
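The quantization.md diff above references a `--quantize` parameter whose options are still to be documented. As a hedged sketch of how such a config might be assembled (the scheme name `"linear:int8"` and the `"groupsize"` option are assumptions based on torchchat's quantization docs, not confirmed by this commit), one could build and serialize it in Python:

```python
import json

# Hypothetical quantization config; the keys below are assumptions,
# not taken from this commit -- check docs/quantization.md for the
# actual schema accepted by --quantize.
quantize_cfg = {"linear:int8": {"groupsize": 256}}

# Serialize to the JSON string that would be passed on the command
# line, e.g.: python3 torchchat.py export ... --quantize '<cfg_json>'
cfg_json = json.dumps(quantize_cfg)
print(cfg_json)
```

Round-tripping the config through `json.dumps`/`json.loads` is a cheap way to catch malformed scheme descriptions before handing them to the CLI.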
