
Commit bacc0c8

kimishpatel authored and facebook-github-bot committed
Update results section (#2850)
Summary:
Pull Request resolved: #2850

Added quantization and performance summary section.

Created from CodeHub with https://fburl.com/edit-in-codehub

Reviewed By: mergennachin, shoumikhin, kirklandsign

Differential Revision: D55761258

fbshipit-source-id: 9c348c38013e71bf6ba2abe118e8ae03b3e4c591
1 parent 88b6cd2 commit bacc0c8


1 file changed: +24 -1 lines changed


examples/models/llama2/README.md

Lines changed: 24 additions & 1 deletion
@@ -17,7 +17,30 @@ Please note that the models are subject to the [acceptable use policy](https://g
# Results

-TODO - Will fill in table of results.
Since the 7B Llama2 model needs at least 4-bit quantization to fit even on some high-end phones, the results presented here correspond to a 4-bit groupwise post-training quantized model.
## Quantization:
We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that the quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://pytorch.org/tutorials/recipes/recipes/dynamic_quantization.html).
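
To make the scheme concrete, below is a minimal sketch in plain PyTorch of the arithmetic involved: per-channel groupwise 4-bit symmetric quantization of a weight matrix, plus per-token dynamic 8-bit quantization of activations. It is illustrative only (the shapes, group size, and helper names are assumptions), not the actual quantization/export flow used in this example.

```python
import torch

def quantize_weights_groupwise_4bit(w: torch.Tensor, group_size: int = 128):
    """Symmetric 4-bit groupwise quantization of a 2-D weight tensor (illustrative).

    Each row (output channel) is split into groups of `group_size` columns and
    every group gets its own FP32 scale. Values are mapped to the signed 4-bit
    range [-8, 7].
    """
    out_ch, in_ch = w.shape
    assert in_ch % group_size == 0, "in_features must be divisible by group_size"
    w_groups = w.reshape(out_ch, in_ch // group_size, group_size)

    # One scale per (output channel, group); 7 is the largest positive int4 value.
    max_abs = w_groups.abs().amax(dim=-1, keepdim=True)
    scales = (max_abs / 7.0).clamp(min=1e-8)

    q = torch.clamp(torch.round(w_groups / scales), -8, 7).to(torch.int8)
    # int4 values are stored in an int8 container here; scales stay in FP32.
    return q.reshape(out_ch, in_ch), scales.squeeze(-1)

def quantize_activations_per_token_8bit(x: torch.Tensor):
    """Dynamic per-token 8-bit quantization: scales come from each token's runtime min/max."""
    max_abs = x.abs().amax(dim=-1, keepdim=True)
    scales = (max_abs / 127.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(x / scales), -128, 127).to(torch.int8)
    return q, scales

# Example: quantize one linear layer's weight with group size 128 (shapes are made up).
w = torch.randn(4096, 4096)
q_w, w_scales = quantize_weights_groupwise_4bit(w, group_size=128)
x = torch.randn(1, 16, 4096)  # (batch, tokens, hidden)
q_x, x_scales = quantize_activations_per_token_8bit(x)
```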
We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different group sizes.

|Llama 2 | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256) |
|--------|-----------------|-----------------------|-----------------------|
|WikiText Perplexity | 9.16 | 10.2 | 10.7 |

Note that group sizes smaller than 128 were not enabled, since such models were still too large. This is because our current efforts have focused on enabling FP32, and support for FP16 is under way. What this implies for model size is that 1) the embedding table is in FP32 and 2) the quantized weight scales are in FP32.
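
As a reminder of what the numbers above mean, perplexity is the exponential of the average per-token negative log-likelihood. A minimal sketch of that relationship (illustrative only, not the lm-evaluation-harness code path):

```python
import math
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, targets: torch.Tensor) -> float:
    """Perplexity = exp(mean negative log-likelihood of the target tokens).

    logits:  (num_tokens, vocab_size) model outputs at each position.
    targets: (num_tokens,) ground-truth next token at each position.
    """
    nll = F.cross_entropy(logits, targets, reduction="mean")
    return math.exp(nll.item())

# Toy example with random data; a real evaluation runs the model over WikiText.
logits = torch.randn(10, 32000)
targets = torch.randint(0, 32000, (10,))
print(perplexity(logits, targets))
```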
## Performance
Performance was measured on Samsung Galaxy S22, S23, S24 and OnePlus 12 devices. Performance is reported in tokens/second.

|Device | Groupwise 4-bit (128) | Groupwise 4-bit (256) |
|-------|-----------------------|-----------------------|
|Galaxy S22 | x | x |
|Galaxy S24 | x | x |
|OnePlus 12 | x | x |
|iPhone 15 Pro | x | x |

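For context on how a tokens/second figure like the placeholders above is typically obtained, here is a minimal timing sketch; the `generate_fn` callable is a hypothetical stand-in for the actual on-device runtime call, not this example's benchmark harness.

```python
import time

def tokens_per_second(generate_fn, prompt_tokens, num_new_tokens: int) -> float:
    """Time autoregressive generation and report decoded tokens per second.

    generate_fn(prompt_tokens, num_new_tokens) is assumed to run the model and
    return the generated token ids; it stands in for the real runtime call.
    """
    start = time.perf_counter()
    generated = generate_fn(prompt_tokens, num_new_tokens)
    elapsed = time.perf_counter() - start
    return len(generated) / elapsed

# Example with a dummy "model" that just echoes token ids.
dummy_generate = lambda prompt, n: list(range(n))
print(f"{tokens_per_second(dummy_generate, [1, 2, 3], 128):.1f} tokens/s")
```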
# Instructions
