
Commit 67752ee

mergennachin authored and facebook-github-bot committed
Add llama animated gif to llama readme (#5474)
Summary:
Pull Request resolved: #5474

bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks
allow-large-files

Reviewed By: lucylq, guangy10

Differential Revision: D62968545

fbshipit-source-id: da7554ba40f2460adfd4ea178431610e8b831587
1 parent 61e5d4c commit 67752ee

2 files changed, +8 −0 lines changed

examples/models/llama2/README.md

Lines changed: 8 additions & 0 deletions
@@ -19,6 +19,14 @@ Please note that the models are subject to the [Llama 2 Acceptable Use Policy](h
 
 Since the Llama 2 7B or Llama 3 8B model needs at least 4-bit quantization to fit even within some of the high-end phones, results presented here correspond to the 4-bit groupwise post-training quantized model.
 
+<p align="center">
+<img src="./llama_via_xnnpack.gif" width=300>
+<br>
+<em>
+Running Llama3.1 8B on Android phone
+</em>
+</p>
+
 ## Quantization:
 We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized; in our case weights were per-channel groupwise quantized with 4-bit signed integer. For more information refer to this [page](https://github.com/pytorch/ao).
 
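The quantization paragraph in the diff above is dense, so here is a minimal sketch of the two steps it describes, written in plain PyTorch with hypothetical helper names (the production path goes through [torchao](https://github.com/pytorch/ao), not this code): weights are quantized ahead of time to signed 4-bit values with one scale per group of input elements, while activations are quantized to signed 8-bit per token at runtime from their observed min/max. For scale, 4 bits per weight puts a 7B-parameter model at roughly 3.5 GB, versus about 14 GB at fp16, which is why the README treats 4-bit quantization as a requirement on phones.

```python
# Illustrative sketch only: these helpers are hypothetical and are not the
# torchao / ExecuTorch API; they just mirror the scheme described above.
import torch

def quantize_weights_4bit_groupwise(w: torch.Tensor, group_size: int = 128):
    """Static, symmetric 4-bit quantization of a [out_ch, in_ch] weight
    matrix, with one scale per group of `group_size` input elements."""
    out_ch, in_ch = w.shape
    assert in_ch % group_size == 0
    groups = w.reshape(out_ch, in_ch // group_size, group_size)
    # Signed int4 range is [-8, 7]; derive each group's scale from its max magnitude.
    scales = groups.abs().amax(dim=-1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q.reshape(out_ch, in_ch), scales.squeeze(-1)

def quantize_activations_8bit_dynamic(x: torch.Tensor):
    """Dynamic, asymmetric 8-bit quantization of activations, per token:
    scale and zero point come from the min/max observed at runtime."""
    x_min = x.amin(dim=-1, keepdim=True)
    x_max = x.amax(dim=-1, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-6) / 255.0  # int8 range is [-128, 127]
    zero_point = torch.round(-128 - x_min / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, -128, 127).to(torch.int8)
    return q, scale, zero_point
```

A quantized linear layer then computes with these integer tensors (dequantizing via the stored scales and zero points, or using integer kernels directly); the per-token activation parameters are recomputed on every forward pass, which is what "dynamic" means here.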

examples/models/llama2/llama_via_xnnpack.gif

6.83 MB (binary file added)
