Summary:
Pull Request resolved: #3154
All the steps up to validating on desktop.
Reviewed By: iseeyuan
Differential Revision: D56358723
fbshipit-source-id: 32d246882d9609840932a7da22c2e3dbf015c0a8
examples/models/llama2/README.md (20 additions, 4 deletions)
@@ -17,9 +17,9 @@ Please note that the models are subject to the [acceptable use policy](https://g
# Results

-Since 7B Llama2 model needs at least 4-bit quantization to fit even within some of the highend phones, results presented here correspond to 4-bit groupwise post-training quantized model.
+Since 7B Llama2 model needs at least 4-bit quantization to fit even within some of the highend phones, results presented here correspond to 4-bit groupwise post-training quantized model.

-For Llama3, we can use the same process. Note that it's only supported in the ExecuTorch main branch.
+For Llama3, we can use the same process. Note that it's only supported in the ExecuTorch main branch.

## Quantization:
We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that the quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations to 8 bits (signed integer). Furthermore, weights are statically quantized; in our case weights were per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://github.com/pytorch-labs/ao/).
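
(Note, not part of the diff: as a rough illustration of the scheme described above, here is a minimal PyTorch sketch of groupwise 4-bit weight quantization combined with dynamic per-token 8-bit activation quantization. The function names, the group size of 128, and the symmetric min/max scaling are illustrative assumptions, not the actual torchao/ExecuTorch implementation.)

```python
import torch

def quantize_weights_groupwise_4bit(w: torch.Tensor, group_size: int = 128):
    """Statically quantize a 2-D weight [out_features, in_features] to 4-bit signed ints, one scale per group."""
    out_ch, in_ch = w.shape
    assert in_ch % group_size == 0
    w_grouped = w.reshape(out_ch, in_ch // group_size, group_size)
    # Symmetric scale per (output channel, group): map the group's max |w| to the int4 limit 7.
    scales = w_grouped.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(w_grouped / scales), -8, 7).to(torch.int8)
    return q, scales  # int4 values stored in int8 containers

def quantize_activations_dynamic_8bit(x: torch.Tensor):
    """Dynamically quantize activations per token: scales come from the runtime min/max range."""
    # Symmetric per-token (per-row) scale computed at inference time.
    scales = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scales), -128, 127).to(torch.int8)
    return q, scales

# Usage: quantize a toy linear layer, then run a dequantize-and-matmul reference.
w = torch.randn(256, 512)            # [out_features, in_features]
x = torch.randn(4, 512)              # [tokens, in_features]
qw, w_scales = quantize_weights_groupwise_4bit(w, group_size=128)
qx, x_scales = quantize_activations_dynamic_8bit(x)
w_hat = (qw.float() * w_scales).reshape_as(w)
x_hat = qx.float() * x_scales
y = x_hat @ w_hat.t()                # approximates x @ w.t()
```

Keeping a separate scale for each small block of input channels confines the 4-bit rounding error to that block, which is why groupwise quantization typically preserves accuracy better than a single per-tensor scale.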
@@ -57,7 +57,7 @@ Performance was measured on Samsung Galaxy S22, S24, One Plus 12 and iPhone 15 m
- For Llama7b, your device may require at least 32GB RAM. If this is a constraint for you, please try the smaller stories model.

## Step 1: Setup
-1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch
+1. Follow the [tutorial](https://pytorch.org/executorch/main/getting-started-setup) to set up ExecuTorch. For installation run `./install_requirements.sh --pybind xnnpack`
2. Run `examples/models/llama2/install_requirements.sh` to install a few dependencies.

## Step 2: Prepare model
@@ -103,6 +103,16 @@ If you want to deploy and run a smaller model for educational purposes. From `ex