Commit a6f754a

helunwencser authored and facebook-github-bot committed

Add clarification about perplexity discrepancy (#6053)

Summary:
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):
* __->__ #6053

Pull Request resolved: #6053
Reviewed By: mergennachin
Differential Revision: D64120330
Pulled By: helunwencser
fbshipit-source-id: 61ccaf98c4a870863ceb0b3de32e057624bfa7bd
1 parent b8f66f9 commit a6f754a

File tree

1 file changed (+3, -1 lines)


examples/models/llama2/README.md

Lines changed: 3 additions & 1 deletion

@@ -47,7 +47,9 @@ Additionally, 1B/3B models are sensitive to accuracy loss when regular PTQ quant

## Quantization:

We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from the min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized; in our case weights were per-channel groupwise quantized with 4-bit signed integers. For more information refer to this [page](https://github.com/pytorch/ao).
- We evaluated UncycloText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes, with max_seq_len 2048, and 1000 samples.
+ We evaluated UncycloText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Please note that LM Eval reports perplexity normalized by word count instead of token count. You may see different perplexity for UncycloText from other sources if they implement it differently. More details can be found [here](https://github.com/EleutherAI/lm-evaluation-harness/issues/2301).
+
+ Below are the results for two different groupsizes, with max_seq_len 2048, and 1000 samples.

|Model   | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256)
|--------|-----------------|-----------------------|----------------------
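The scheme described in the diff, groupwise 4-bit weights plus dynamic 8-bit activations, can be sketched in a few lines. This is a minimal NumPy illustration only; the function names and the symmetric min/max formulation are assumptions of this sketch, and the real implementation lives in [pytorch/ao](https://github.com/pytorch/ao):

```python
import numpy as np

def quantize_weights_groupwise(w, group_size=128, bits=4):
    """Static, symmetric per-group quantization of a flat weight array (sketch)."""
    qmax = 2 ** (bits - 1) - 1                    # 7 for signed int4
    groups = w.reshape(-1, group_size)            # one scale per group
    scales = np.abs(groups).max(axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0.0, 1.0, scales)  # guard against all-zero groups
    q = np.clip(np.round(groups / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def quantize_activations_dynamic(x, bits=8):
    """Dynamic quantization: the scale comes from the runtime min/max of x."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for signed int8
    scale = float(np.abs(x).max()) / qmax
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, s = quantize_weights_groupwise(w, group_size=128)
print(q.shape, q.dtype)  # (2, 128) int8, values in [-8, 7]
```

Weights get a scale per group computed once (static), while activations recompute their scale from each batch's observed range (dynamic), which is why no calibration data is needed for activations.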

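The normalization difference the added lines warn about comes down to dividing the same total negative log-likelihood by a different count before exponentiating. The numbers below are made up purely for illustration:

```python
import math

# Hypothetical corpus statistics: total negative log-likelihood, plus
# subword-token and whitespace-word counts (tokens outnumber words).
total_nll = 12000.0
num_tokens = 4000
num_words = 2500

ppl_token = math.exp(total_nll / num_tokens)  # token-normalized perplexity
ppl_word = math.exp(total_nll / num_words)    # word-normalized (LM Eval style)
print(round(ppl_token, 2), round(ppl_word, 2))  # → 20.09 121.51
```

Because a subword tokenizer emits more tokens than there are words, word-normalized perplexity is systematically higher, which explains why LM Eval's UncycloText numbers can differ from token-normalized figures reported elsewhere.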