Commit c44d8ef

Update base for Update on "qnn end to end flow"
Patch a few changes, including:

- support bool tensor type
- support fp16 and fix the 8w8a quantization
- add two non-supported ops (slice_scatter and index_put) in common_defs.py

The stories model works end to end:

AOT, fp16:
```
python -m examples.models.llama2.export_llama -kv --qnn -c stories110M.pt -p params.json
```

Quantize:
```
python -m examples.models.llama2.export_llama -kv --qnn --pt2e_quantize -c stories110M.pt -p params.json
```

Runtime:
```
/llama_main --model_path=llama2_fp16_qnn_2.21.pte --tokenizer_path=tokenizer.bin --prompt="Once"
```

Output:
```
Once upon a time, there was a boy named Tim. Tim had a pet dog named Max. Max was a big, strong dog. They liked to play and run in the park. One day, Tim and Max went to the park to play. They saw a cat. The cat was up in a tree. Max wanted to help the cat. He tried to climb the tree, but he could not. Then, something unexpected happened. Max started to climb the tree! He was very strong. Max helped the cat come down. The cat was happy. Tim was so proud of his pet.
```

The stories model is too small and sensitive to quantization.

Differential Revision: [D56119738](https://our.internmc.facebook.com/intern/diff/D56119738/)

[ghstack-poisoned]
2 parents: d1dfcd4 + cf78107

File tree: 3 files changed, +9 -7 lines

examples/models/llama2/README.md

Lines changed: 4 additions & 3 deletions
@@ -24,11 +24,12 @@ For Llama3, we can use the same process. Note that it's only supported in the Ex
 ## Quantization:
 We employed 4-bit groupwise per token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with 4-bit signed integer. For more information refer to this [page](https://github.com/pytorch-labs/ao/).

-We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes.
+We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes, with max_seq_len 2048, and 1000 samples.

-|Llama 2 | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256)
+|Model | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256)
 |--------|-----------------| ---------------------- | ---------------
-|WikiText Perplexity | 9.16 | 10.2 | 10.7
+|Llama 2 7B | 9.2 | 10.2 | 10.7
+|Llama 3 8B | 7.9 | 9.4 | 9.7

 Note that groupsize less than 128 was not enabled, since such models were still too large. This is because our current efforts have focused on enabling FP32 and support for FP16 is under way. What this implies for model size is that 1) the embedding table is in FP32 and 2) quantized weight scales are FP32.
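To make the scheme described in the README change concrete, below is a minimal sketch of 4-bit groupwise weight quantization and 8-bit per-token dynamic activation quantization in plain PyTorch. The function names, the default group size, and the choice of symmetric weights / asymmetric activations are assumptions for illustration; this is not the torchao or ExecuTorch implementation.

```python
import torch

def quantize_weights_groupwise(w: torch.Tensor, group_size: int = 128):
    """Symmetric 4-bit groupwise weight quantization (illustrative only).

    w has shape [out_features, in_features]; each row is split into groups of
    `group_size` columns and every group gets its own FP32 scale, which is why
    smaller group sizes increase model size.
    """
    out_features, in_features = w.shape
    groups = w.reshape(out_features, in_features // group_size, group_size)
    # Per-group scale chosen so the largest magnitude maps to the int4 maximum (7).
    scales = groups.abs().amax(dim=-1, keepdim=True).clamp(min=1e-6) / 7.0
    q = torch.clamp(torch.round(groups / scales), -8, 7).to(torch.int8)
    return q.reshape(out_features, in_features), scales.squeeze(-1)

def quantize_activations_per_token(x: torch.Tensor):
    """Dynamic 8-bit (signed) per-token activation quantization (illustrative only).

    Quantization parameters come from the runtime min/max of each token's
    activation vector, which is what "dynamic" refers to above.
    """
    x_min = x.amin(dim=-1, keepdim=True)
    x_max = x.amax(dim=-1, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-6) / 255.0
    zero_point = -128 - torch.round(x_min / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, -128, 127).to(torch.int8)
    return q, scale, zero_point

# Example: quantize one linear layer's weight and a batch of token activations.
w_q, w_scales = quantize_weights_groupwise(torch.randn(4096, 4096), group_size=128)
a_q, a_scale, a_zp = quantize_activations_per_token(torch.randn(8, 4096))
```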

examples/models/llama2/eval_llama_lib.py

Lines changed: 3 additions & 4 deletions
@@ -42,12 +42,11 @@ def __init__(
         tokenizer: Union[SentencePieceTokenizer, Tiktoken],
         max_seq_length: Optional[int] = None,
     ):
-        super().__init__()
+        device = "cuda" if torch.cuda.is_available() else "cpu"
+        super().__init__(device=device)
         self._model = model
         self._tokenizer = tokenizer
-        self._device = (
-            torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
-        )
+        self._device = torch.device(device)
         self._max_seq_length = 2048 if max_seq_length is None else max_seq_length

     @property
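The change above computes the device once as a string, forwards it to the lm-eval base class, and reuses it to build the wrapper's torch.device, so the harness and the wrapper agree on placement. A minimal sketch of the same pattern, with a hypothetical base class standing in for lm-eval's:

```python
import torch

class _BaseEval:  # hypothetical stand-in for the lm-eval base class
    def __init__(self, device: str = "cpu"):
        self.device_str = device

class EvalWrapperSketch(_BaseEval):  # illustrative name, mirrors the diff's pattern
    def __init__(self, model, tokenizer, max_seq_length=None):
        # Decide the device once, as a string, so the same value can be
        # forwarded to the base class and used to build a torch.device below.
        device = "cuda" if torch.cuda.is_available() else "cpu"
        super().__init__(device=device)
        self._model = model
        self._tokenizer = tokenizer
        self._device = torch.device(device)
        self._max_seq_length = 2048 if max_seq_length is None else max_seq_length
```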

examples/models/llama3/README.md

Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+# Summary
+For Llama3, use the same example code, minus tokenizer, as Llama2. Please see the ../llama2/README.md for details.
