Commit 06beace

iseeyuan authored and facebook-github-bot committed

Update README.md on the evaluation parameters (#3139)

Summary: It's not clear how we got the perplexity numbers. Add the parameters we used to get those numbers.

Pull Request resolved: #3139
Reviewed By: lucylq
Differential Revision: D56319905
Pulled By: iseeyuan
fbshipit-source-id: dc387cc84c2fe7a21e44642ff591000fd6728abb

1 parent 1eed125 commit 06beace

File tree

1 file changed (+1, −1 lines)


examples/models/llama2/README.md — 1 addition, 1 deletion

```diff
@@ -24,7 +24,7 @@ For Llama3, we can use the same process. Note that it's only supported in the Ex
 ## Quantization:
 
 We employed 4-bit groupwise per token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from min/max range, at runtime. Here we quantized activations with 8bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with 4bit signed integer. For more information refer to this [page](https://github.com/pytorch-labs/ao/).
 
-We evaluated UncycloText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes.
+We evaluated UncycloText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes, with max_seq_len 2048, and 1000 samples:
 
 |Llama 2 | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256)
 |--------|-----------------| ---------------------- | ---------------
```
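The quantization scheme described in the changed README can be sketched numerically. The following is a minimal NumPy illustration, not ExecuTorch's or torchao's actual implementation: activations get symmetric int8 scales computed per token at runtime from their min/max range, while weights get static per-channel groupwise 4-bit scales (one scale per group of input features). Function names and the `group_size` default are illustrative.

```python
import numpy as np

def quantize_per_token_int8(x):
    """Dynamic activation quantization: one symmetric int8 scale per
    token (row), computed at runtime from that token's max magnitude."""
    # x: (tokens, features)
    amax = np.maximum(np.max(np.abs(x), axis=1, keepdims=True), 1e-8)
    scale = amax / 127.0
    q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
    return q, scale

def quantize_weight_4bit_groupwise(w, group_size=128):
    """Static weight quantization: each row (output channel) is split
    into groups of `group_size` input features, and each group gets its
    own symmetric 4-bit scale (values clipped to the int4 range)."""
    rows, cols = w.shape
    assert cols % group_size == 0
    g = w.reshape(rows, cols // group_size, group_size)
    amax = np.maximum(np.max(np.abs(g), axis=2, keepdims=True), 1e-8)
    scale = amax / 7.0  # symmetric mapping into [-7, 7] of the int4 range
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale

# Round-trip check: dequantized weights stay within half a quantization
# step of the originals.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 256)).astype(np.float32)
qw, ws = quantize_weight_4bit_groupwise(w, group_size=128)
w_hat = (qw * ws).reshape(w.shape)
print(float(np.max(np.abs(w - w_hat))))
```

The two group sizes in the README table (128 and 256) correspond to the `group_size` parameter here: smaller groups mean more scales and better accuracy at the cost of more metadata.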
