
Commit 5a0b621

Update README.md (#366)
python => python3, as per @malfet, since the alias for python is not available on all systems (incl. Ubuntu)
1 parent 9ce3698 commit 5a0b621

File tree

1 file changed

+23
-23
lines changed


README.md

Lines changed: 23 additions & 23 deletions
@@ -16,7 +16,7 @@ Torchchat is an easy-to-use library for running large language models (LLMs) on
 
 ## Quick Start
 ### Initialize the Environment
-The following steps requires you have [Python 3.10](https://www.python.org/downloads/release/python-3100/) installed
+The following steps require that you have [Python 3.10](https://www.python.org/downloads/release/python-3100/) installed.
 
 ```
 # get the code
@@ -31,20 +31,20 @@ source .venv/bin/activate
 ./install_requirements.sh
 
 # ensure everything installed correctly
-python torchchat.py --help
+python3 torchchat.py --help
 
 ```
 
 ### Generating Text
 
 ```
-python torchchat.py generate stories15M
+python3 torchchat.py generate stories15M
 ```
 That’s all there is to it!
 Read on to learn how to use the full power of torchchat.
 
 ## Customization
-For the full details on all commands and parameters run `python torchchat.py --help`
+For the full details on all commands and parameters run `python3 torchchat.py --help`
 
 ### Download
 For supported models, torchchat can download model weights. Most models use HuggingFace as the distribution channel, so you will need to create a HuggingFace
@@ -54,46 +54,46 @@ To install `huggingface-cli`, run `pip install huggingface-cli`. After installin
 HuggingFace.
 
 ```
-python torchchat.py download llama3
+python3 torchchat.py download llama3
 ```
 
 ### Chat
 Designed for interactive and conversational use.
 In chat mode, the LLM engages in a back-and-forth dialogue with the user. It responds to queries, participates in discussions, provides explanations, and can adapt to the flow of conversation.
 
-For more information run `python torchchat.py chat --help`
+For more information run `python3 torchchat.py chat --help`
 
 **Examples**
 ```
-python torchchat.py chat llama3 --tiktoken
+python3 torchchat.py chat llama3 --tiktoken
 ```
 
 ### Generate
 Aimed at producing content based on specific prompts or instructions.
 In generate mode, the LLM focuses on creating text based on a detailed prompt or instruction. This mode is often used for generating written content like articles, stories, reports, or even creative writing like poetry.
 
-For more information run `python torchchat.py generate --help`
+For more information run `python3 torchchat.py generate --help`
 
 **Examples**
 ```
-python torchchat.py generate llama3 --dtype=fp16 --tiktoken
+python3 torchchat.py generate llama3 --dtype=fp16 --tiktoken
 ```
 
 ### Export
 Compiles a model and saves it to run later.
 
-For more information run `python torchchat.py export --help`
+For more information run `python3 torchchat.py export --help`
 
 **Examples**
 
 AOT Inductor:
 ```
-python torchchat.py export stories15M --output-dso-path stories15M.so
+python3 torchchat.py export stories15M --output-dso-path stories15M.so
 ```
 
 ExecuTorch:
 ```
-python torchchat.py export stories15M --output-pte-path stories15M.pte
+python3 torchchat.py export stories15M --output-pte-path stories15M.pte
 ```
 
 ### Browser
@@ -102,7 +102,7 @@ Run a chatbot in your browser that’s supported by the model you specify in the
 **Examples**
 
 ```
-python torchchat.py browser stories15M --temperature 0 --num-samples 10
+python3 torchchat.py browser stories15M --temperature 0 --num-samples 10
 ```
 
 *Running on http://127.0.0.1:5000* should be printed out on the terminal. Click the link or go to [http://127.0.0.1:5000](http://127.0.0.1:5000) on your browser to start interacting with it.
@@ -112,19 +112,19 @@ Enter some text in the input box, then hit the enter key or click the “SEND”
 ### Eval
 Uses lm_eval library to evaluate model accuracy on a variety of tasks. Defaults to wikitext and can be manually controlled using the tasks and limit args.
 
-For more information run `python torchchat.py eval --help`
+For more information run `python3 torchchat.py eval --help`
 
 **Examples**
 
 Eager mode:
 ```
-python torchchat.py eval stories15M -d fp32 --limit 5
+python3 torchchat.py eval stories15M -d fp32 --limit 5
 ```
 
 To test the perplexity for lowered or quantized model, pass it in the same way you would to generate:
 
 ```
-python torchchat.py eval stories15M --pte-path stories15M.pte --limit 5
+python3 torchchat.py eval stories15M --pte-path stories15M.pte --limit 5
 ```
 
 ## Models
@@ -153,17 +153,17 @@ See the [documentation on GGUF](docs/GGUF.md) to learn how to use GGUF files.
 
 ```
 # Llama 3 8B Instruct
-python torchchat.py chat llama3 --tiktoken
+python3 torchchat.py chat llama3 --tiktoken
 ```
 
 ```
 # Stories 15M
-python torchchat.py chat stories15M
+python3 torchchat.py chat stories15M
 ```
 
 ```
 # CodeLama 7B for Python
-python torchchat.py chat codellama
+python3 torchchat.py chat codellama
 ```
 
 ## Desktop Execution
@@ -175,10 +175,10 @@ AOT compiles models into machine code before execution, enhancing performance an
 The following example uses the Stories15M model.
 ```
 # Compile
-python torchchat.py export stories15M --output-dso-path stories15M.so
+python3 torchchat.py export stories15M --output-dso-path stories15M.so
 
 # Execute
-python torchchat.py generate --dso-path stories15M.so --prompt "Hello my name is"
+python3 torchchat.py generate --dso-path stories15M.so --prompt "Hello my name is"
 ```
 
 NOTE: The exported model will be large. We suggest you quantize the model, explained further down, before deploying the model on device.
@@ -190,10 +190,10 @@ ExecuTorch enables you to optimize your model for execution on a mobile or embed
 The following example uses the Stories15M model.
 ```
 # Compile
-python torchchat.py export stories15M --output-pte-path stories15M.pte
+python3 torchchat.py export stories15M --output-pte-path stories15M.pte
 
 # Execute
-python torchchat.py generate --device cpu --pte-path stories15M.pte --prompt "Hello my name is"
+python3 torchchat.py generate --device cpu --pte-path stories15M.pte --prompt "Hello my name is"
 ```
 
 See below under Mobile Execution if you want to deploy and execute a model in your iOS or Android app.
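For context on the rationale: on a stock Ubuntu install only the `python3` executable is guaranteed to be on the PATH, so the bare `python` commands in the old README could fail with "command not found". A minimal sketch of how to check what a given system provides (the `python-is-python3` package is Ubuntu's optional way to restore the bare alias; it is mentioned here for illustration, not something this commit requires):

```
# Does the bare `python` name resolve to anything?
command -v python || echo "no 'python' on PATH"

# `python3` is what the updated README now relies on.
command -v python3 && python3 --version

# Optional, Ubuntu 20.04+: restore the bare `python` alias system-wide.
# sudo apt install python-is-python3
```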
