README.md (+9 -9)
@@ -80,7 +80,7 @@ with `python3 torchchat.py remove llama3`.
* in Chat mode
* in Generate mode
* Fine-tuned models from torchtune
-
+

## Running via PyTorch / Python
@@ -92,7 +92,7 @@ In chat mode, the LLM engages in a back-and-forth dialogue with the user. It res
```bash
# Llama 3 8B Instruct
-python3 torchchat.py chat llama3
+python3 torchchat.py chat llama3
```

```
@@ -134,11 +134,11 @@ Enter some text in the input box, then hit the enter key or click the “SEND”
Quantization is the process of converting a model into a more memory-efficient representation. Quantization is particularly important for accelerators -- to take advantage of the available memory bandwidth and to fit within their often limited high-speed memory -- and for mobile devices, to fit within their typically very limited memory.

-Depending on the model and the target device, different quantization recipes may be applied. torchchat contains two example configurations to optimize performance for GPU-based systems, `config/data/qconfig_gpu.json`, and mobile systems, `config/data/qconfig_mobile.json`. The GPU configuration is targeted towards optimizing for memory bandwidth, which is a scarce resource in powerful GPUs (and, to a lesser degree, memory footprint to fit large models into a device's memory). The mobile configuration is targeted towards optimizing for memory footprint because in many devices a single application is limited to as little as GB or less of memory.
+Depending on the model and the target device, different quantization recipes may be applied. torchchat contains two example configurations to optimize performance for GPU-based systems, `config/data/cuda.json`, and mobile systems, `config/data/mobile.json`. The GPU configuration is targeted towards optimizing for memory bandwidth, which is a scarce resource in powerful GPUs (and, to a lesser degree, memory footprint to fit large models into a device's memory). The mobile configuration is targeted towards optimizing for memory footprint because in many devices a single application is limited to as little as GB or less of memory.

You can use the quantization recipes in conjunction with any of the `chat`, `generate`, and `browser` commands to test their impact and accelerate model execution. You will apply these recipes to the `export` commands below to optimize the exported models. For example:

[...]

-python3 torchchat.py generate llama3 --quantize config/data/qconfig_gpu.json --dso-path llama3.so --prompt "Hello my name is"
+python3 torchchat.py generate llama3 --quantize config/data/cuda.json --dso-path llama3.so --prompt "Hello my name is"
```

-NOTE: We use `--quantize config/data/qconfig_gpu.json` to quantize the llama3 model to reduce model size and improve performance for on-device use cases.
+NOTE: We use `--quantize config/data/cuda.json` to quantize the llama3 model to reduce model size and improve performance for on-device use cases.

**Build Native Runner Binary**
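For quick verification of the rename, a minimal sketch using the new path. The first command follows directly from the README sentence above (`--quantize` works with the `chat`/`generate`/`browser` commands); the `export` step and its `--output-dso-path` flag are assumptions to check against the README's export section.

```bash
# Eager-mode generation with the renamed GPU recipe (grounded in the README text above).
python3 torchchat.py generate llama3 --quantize config/data/cuda.json --prompt "Hello my name is"

# Assumed AOT-export flow; the --output-dso-path flag name is an assumption, not confirmed by this diff.
python3 torchchat.py export llama3 --quantize config/data/cuda.json --output-dso-path llama3.so
python3 torchchat.py generate llama3 --dso-path llama3.so --prompt "Hello my name is"
```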
@@ -185,12 +185,12 @@ Before running ExecuTorch commands, you must first set-up ExecuTorch in torchcha
python3 torchchat.py generate llama3 --device cpu --pte-path llama3.pte --prompt "Hello my name is"
```

-NOTE: We use `--quantize config/data/qconfig_mobile.json` to quantize the llama3 model to reduce model size and improve performance for on-device use cases.
+NOTE: We use `--quantize config/data/mobile.json` to quantize the llama3 model to reduce model size and improve performance for on-device use cases.

See below under [Mobile Execution](#mobile-execution) if you want to deploy and execute a model in your iOS or Android app.
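The mobile recipe follows the same pattern; a sketch assuming a matching `--output-pte-path` export flag (an assumption), with the `generate` command taken from the hunk above:

```bash
# Assumed export with the renamed mobile recipe; the --output-pte-path flag name is an assumption.
python3 torchchat.py export llama3 --quantize config/data/mobile.json --output-pte-path llama3.pte

# Run the exported ExecuTorch program on CPU, matching the README command quoted above.
python3 torchchat.py generate llama3 --device cpu --pte-path llama3.pte --prompt "Hello my name is"
```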
@@ -336,6 +336,6 @@ you've built around local LLM inference.
## License
-torchchat is released under the [BSD 3 license](LICENSE). (Additional code in this
+torchchat is released under the [BSD 3 license](LICENSE). (Additional code in this
distribution is covered by the MIT and Apache Open Source licenses.) However you may have other legal obligations
that govern your use of content, such as the terms of service for third-party models.
docs/quantization.md (+1 -1)
@@ -29,7 +29,7 @@ Due to the larger vocabulary size of llama3, we also recommend quantizing the em
***[GPTQ](https://arxiv.org/abs/2210.17323) and [HQQ](https://mobiusml.github.io/hqq_blog/) are two different algorithms to address accuracy loss when using lower bit quantization. Because HQQ relies on data/calibration-free quantization, it tends to take less time to quantize a model.

## Quantization API
-Quantization options are passed in json format either as a config file (see [qconfig_gpu.json](../config/data/qconfig_gpu.json) and [qconfig_mobile.json](../config/data/qconfig_mobile.json)) or a JSON string.
+Quantization options are passed in json format either as a config file (see [cuda.json](../config/data/cuda.json) and [mobile.json](../config/data/mobile.json)) or a JSON string.

The expected JSON format is described below. Refer to the tables above for valid `bitwidth` and `groupsize` values.
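To illustrate the two equivalent ways of passing options after the rename, a sketch; the scheme names and values in the inline JSON string are placeholders only, and the authoritative names and valid `bitwidth`/`groupsize` values are the ones listed in the tables above.

```bash
# Option 1: point --quantize at the renamed config file.
python3 torchchat.py generate llama3 --quantize config/data/mobile.json --prompt "Hello my name is"

# Option 2: pass the same options as an inline JSON string (keys/values here are illustrative placeholders).
python3 torchchat.py generate llama3 \
  --quantize '{"embedding": {"bitwidth": 4, "groupsize": 32}, "linear:int4": {"groupsize": 256}}' \
  --prompt "Hello my name is"
```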