Commit e1fb003 (1 parent: ee681bf)
Update README.md (#916)
Few editorial edits, and added links.

File tree: 1 file changed (+13 -11 lines)


README.md

Lines changed: 13 additions & 11 deletions
@@ -56,32 +56,32 @@ source .venv/bin/activate
 
 [shell default]: ./install_requirements.sh
 
-Installations can be tested by
+Installations can be tested by running
 
 ```bash
 # ensure everything installed correctly
 python3 torchchat.py --help
 ```
 
 ### Download Weights
-Most models use HuggingFace as the distribution channel, so you will need to create a HuggingFace account.
+Most models use Hugging Face as the distribution channel, so you will need to create a Hugging Face account.
 
 [prefix default]: HF_TOKEN="${SECRET_HF_TOKEN_PERIODIC}"
-Create a HuggingFace user access token [as documented here](https://huggingface.co/docs/hub/en/security-tokens) with the `write` role.
-Log into huggingface:
+Create a Hugging Face user access token [as documented here](https://huggingface.co/docs/hub/en/security-tokens) with the `write` role.
+Log into Hugging Face:
 ```
 huggingface-cli login
 ```
 
 Once this is done, torchchat will be able to download model artifacts from
-HuggingFace.
+Hugging Face.
 
 ```
 python3 torchchat.py download llama3
 ```
 
-*NOTE: This command may prompt you to request access to llama3 via
-HuggingFace, if you do not already have access. Simply follow the
+*NOTE: This command may prompt you to request access to Llama 3 via
+Hugging Face, if you do not already have access. Simply follow the
 prompts and re-run the command when access is granted.*
 
 View available models with:
@@ -99,9 +99,10 @@ Finally, you can also remove downloaded models with the remove command:
 
 
 ## Running via PyTorch / Python
-[Follow the installation steps if you haven't](#installation)
+[Follow the installation steps if you haven't.](#installation)
 
 ### Chat
+This mode allows you to chat with an LLM in an interactive fashion.
 [skip default]: begin
 ```bash
 # Llama 3 8B Instruct
@@ -112,6 +113,7 @@ python3 torchchat.py chat llama3
 For more information run `python3 torchchat.py chat --help`
 
 ### Generate
+This mode generates text based on an input prompt.
 ```bash
 python3 torchchat.py generate llama3 --prompt "write me a story about a boy and his bear"
 ```
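Stepping out of the diff for a moment: the `generate` invocation in the hunk above is one-shot. A minimal sketch of driving it over several prompts follows; the loop and the second prompt string are illustrative assumptions, and only the torchchat command itself comes from the diff (it is left commented out so the sketch runs without torchchat installed):

```shell
# Illustrative wrapper: one generation per prompt.
# Real call (commented out): python3 torchchat.py generate llama3 --prompt "$prompt"
for prompt in "write me a story about a boy and his bear" \
              "write me a haiku about autumn"; do
  echo "generating for prompt: $prompt"
  # python3 torchchat.py generate llama3 --prompt "$prompt"
done
```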
@@ -120,7 +122,7 @@ For more information run `python3 torchchat.py generate --help`
 
 
 ### Browser
-
+This mode provides access to the model via the browser's localhost.
 [skip default]: begin
 ```
 python3 torchchat.py browser llama3
@@ -143,7 +145,7 @@ conversation.
 ## Desktop/Server Execution
 
 ### AOTI (AOT Inductor)
-AOT compiles models before execution for faster inference
+AOT compiles models before execution for faster inference (read more about AOTI [here](https://pytorch.org/blog/pytorch2-2/)).
 
 The following example exports and executes the Llama3 8B Instruct
 model. The first command performs the actual export, the second
@@ -179,7 +181,7 @@ cmake-out/aoti_run exportedModels/llama3.so -z `python3 torchchat.py where llama
 
 ## Mobile Execution
 
-ExecuTorch enables you to optimize your model for execution on a
+[ExecuTorch](https://github.com/pytorch/executorch) enables you to optimize your model for execution on a
 mobile or embedded device, but can also be used on desktop for
 testing.
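Tying the two execution paths in this diff together: AOTI export produces a shared library (`exportedModels/llama3.so` above, run with `cmake-out/aoti_run`), while ExecuTorch uses its own `.pte` artifact format for mobile targets. As a sketch, a script could route an artifact by extension; the file names and the routing logic are illustrative assumptions, not torchchat behavior:

```shell
# Illustrative: pick an execution path from the exported artifact's extension.
# .so  -> desktop/server execution via the AOTI runner
# .pte -> mobile/embedded execution via the ExecuTorch runtime
for artifact in "exportedModels/llama3.so" "exportedModels/llama3.pte"; do
  case "$artifact" in
    *.so)  echo "$artifact -> AOTI (desktop/server)" ;;
    *.pte) echo "$artifact -> ExecuTorch (mobile/embedded)" ;;
    *)     echo "$artifact -> unknown artifact type" ;;
  esac
done
```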

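A closing aside on the Download Weights hunk above: both the `HF_TOKEN` variable and `huggingface-cli login` depend on a token being available, which is easy to miss in automation. A minimal pre-flight sketch follows; the helper function name and messages are illustrative assumptions, and only the `HF_TOKEN` variable name comes from the diff:

```shell
# Illustrative pre-flight check before `python3 torchchat.py download llama3`.
check_hf_token() {
  if [ -n "$1" ]; then
    echo "token present"
  else
    echo "token missing: run 'huggingface-cli login' or set HF_TOKEN"
  fi
}

check_hf_token ""           # no token configured
check_hf_token "hf_example" # hypothetical token value
```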