
Quantize llama3 on et export #436

Merged 2 commits on Apr 26, 2024
24 changes: 14 additions & 10 deletions README.md
@@ -112,10 +112,13 @@ Designed for interactive graphical conversations using the familiar web browser

Quantization is the process of converting a model into a more memory-efficient representation. Quantization is particularly important for accelerators, to take advantage of the available memory bandwidth and to fit into the often limited high-speed memory of accelerators, and for mobile devices, to fit into their typically very limited memory.
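As a rough back-of-envelope illustration (an estimate, not a measured figure): an 8B-parameter model stored at 16 bits per weight occupies about 16 GB for the weights alone (8B × 2 bytes), while the same weights quantized to 4 bits occupy about 4 GB (8B × 0.5 bytes), plus a small overhead for per-group scales.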

Depending on the model and the target device, different quantization recipes may be applied. Torchchat contains two example configurations: one to optimize performance for GPU-based systems (`config/data/qconfig_gpu.json`) and one for mobile systems (`config/data/qconfig_mobile.json`). The GPU configuration targets memory bandwidth, which is a scarce resource even on powerful GPUs (and, to a lesser degree, memory footprint, to fit large models into a device's memory). The mobile configuration targets memory footprint, because on many devices a single application is limited to as little as a few GB of memory or less.

Depending on the model and the target device, different quantization recipes may be applied. Torchchat contains two example configurations: one to optimize performance for GPU-based systems (`config/data/cuda.json`) and one for mobile systems (`config/data/mobile.json`). The GPU configuration targets memory bandwidth, which is a scarce resource even on powerful GPUs (and, to a lesser degree, memory footprint, to fit large models into a device's memory). The mobile configuration targets memory footprint, because on many devices a single application is limited to as little as a few GB of memory or less.

You can use the quantization recipes in conjunction with any of the `chat`, `generate` and `browser` commands to test their impact and accelerate model execution. You will apply these recipes to the export commands below, to optimize the exported models. To adapt these recipes or write your own, please refer to the [quantization overview](docs/quantization.md).
You can use the quantization recipes in conjunction with any of the `chat`, `generate` and `browser` commands to test their impact and accelerate model execution. You will apply these recipes to the `export` commands below, to optimize the exported models. For example:
> **Contributor:** did we add eval as well

> **Contributor:** I'll defer to you for adding @jerryzh168

```
python3 torchchat.py chat llama3 --quantize config/data/qconfig_gpu.json
```
To adapt these recipes or write your own, please refer to the [quantization overview](docs/quantization.md).
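As a rough sketch of what a recipe contains (the scheme names and options below are illustrative assumptions, not the authoritative set; see the quantization overview and the files under `config/data/` for the real options), a recipe is a small JSON object mapping quantization schemes to their settings, and the same JSON can also be passed inline:

```
# Hypothetical inline recipe: 4-bit grouped embedding quantization plus
# 8-bit-activation / 4-bit-weight linear quantization. Scheme names and
# options here are illustrative assumptions.
python3 torchchat.py generate llama3 --quantize '{"embedding": {"bitwidth": 4, "groupsize": 32}, "linear:a8w4dq": {"groupsize": 256}}' --prompt "Hello my name is"
```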

*TO BE REPLACED BY SUITABLE WORDING PROVIDED BY LEGAL:*

@@ -224,31 +227,32 @@ python3 torchchat.py chat codellama
AOT compiles models into machine code before execution, enhancing performance and predictability. It's particularly beneficial for frequently used models or those requiring quick start times. However, it may lead to larger binary sizes and lacks the runtime flexibility of eager mode.

**Examples**
The following example uses the Stories15M model.
The following example uses the Llama3 8B model.
```
# Compile
python3 torchchat.py export stories15M --output-dso-path stories15M.so
python3 torchchat.py export llama3 --output-dso-path llama3.so

# Execute
python3 torchchat.py generate --dso-path stories15M.so --prompt "Hello my name is"
python3 torchchat.py generate llama3 --quantize config/data/qconfig_gpu.json --dso-path llama3.so --prompt "Hello my name is"
```

NOTE: The exported model will be large. We suggest you quantize the model, explained further down, before deploying the model on device.
NOTE: We use `--quantize config/data/qconfig_gpu.json` to quantize the llama3 model to reduce model size and improve performance for on-device use cases.
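Picking up the reviewer's question about `eval` above: assuming torchchat's `eval` subcommand accepts the same artifact flags as `generate` (an assumption, not something this diff confirms), you could gauge the accuracy impact of a quantization recipe with something like:

```
# Hypothetical: run eval against the exported artifact, assuming `eval`
# accepts --dso-path the way `generate` does.
python3 torchchat.py eval llama3 --dso-path llama3.so
```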

### ExecuTorch

ExecuTorch enables you to optimize your model for execution on a mobile or embedded device, but can also be used on desktop for testing.
Before running ExecuTorch commands, you must first set up ExecuTorch in torchchat; see [Set-up Executorch](docs/executorch_setup.md).

**Examples**
The following example uses the Stories15M model.
The following example uses the Llama3 8B model.
```
# Compile
python3 torchchat.py export stories15M --output-pte-path stories15M.pte
python3 torchchat.py export llama3 --quantize config/data/qconfig_mobile.json --output-pte-path llama3.pte

# Execute
python3 torchchat.py generate --device cpu --pte-path stories15M.pte --prompt "Hello my name is"
python3 torchchat.py generate llama3 --device cpu --pte-path llama3.pte --prompt "Hello my name is"
```
NOTE: We use `--quantize config/data/qconfig_mobile.json` to quantize the llama3 model to reduce model size and improve performance for on-device use cases.
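A quick way to sanity-check the footprint reduction is to export the model once without `--quantize` and once with it, then compare the artifact sizes (`llama3_fp.pte` is just a hypothetical name for the unquantized export):

```
# Export unquantized and quantized variants, then compare file sizes.
# llama3_fp.pte is a hypothetical name; llama3.pte matches the example above.
python3 torchchat.py export llama3 --output-pte-path llama3_fp.pte
python3 torchchat.py export llama3 --quantize config/data/qconfig_mobile.json --output-pte-path llama3.pte
ls -lh llama3_fp.pte llama3.pte
```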

See below under [Mobile Execution](#run-mobile) if you want to deploy and execute a model in your iOS or Android app.

File renamed without changes.
File renamed without changes.