Summary:
Pull Request resolved: #5573
This changes the default behavior: it improves prefill throughput by ~20% but regresses decode by ~7%.
As a next step, I will debug the decode perf regression further, and see whether there is more to gain on prefill, by tuning the XNNPACK thread dispatcher for gemm, gemv, mul, add, sigmoid, and sub.
**On my local (unreliable) S23:**
* Vanilla:
```
dm1q:/data/local/tmp/llama $ ./llama_main_release \
--model_path ./llama_gs32_vanilla.pte \
--tokenizer_path ./tokenizer.bin \
--seq_len=128 \
--prompt="${prompt}"
[...]
I 00:00:22.188618 executorch:stats.h:84] Prompt Tokens: 44 Generated Tokens: 83
I 00:00:22.188621 executorch:stats.h:90] Model Load Time: 12.922000 (seconds)
I 00:00:22.188624 executorch:stats.h:100] Total inference time: 9.252000 (seconds) Rate: 8.971033 (tokens/second)
I 00:00:22.188627 executorch:stats.h:108] Prompt evaluation: 1.740000 (seconds) Rate: 25.287356 (tokens/second)
I 00:00:22.188630 executorch:stats.h:119] Generated 83 tokens: 7.512000 (seconds) Rate: 11.048988 (tokens/second)
I 00:00:22.188632 executorch:stats.h:127] Time to first generated token: 1.740000 (seconds)
I 00:00:22.188634 executorch:stats.h:134] Sampling time over 127 tokens: 0.015000 (seconds)
[...]
```
* Two-partition (2part):
```
dm1q:/data/local/tmp/llama $ ./llama_main_release \
--model_path ./llama_gs32_2part.pte \ # New PTE
--tokenizer_path ./tokenizer.bin \
--seq_len=128 \
--prompt="${prompt}"
[...]
I 00:00:22.205058 executorch:stats.h:84] Prompt Tokens: 44 Generated Tokens: 83
I 00:00:22.205061 executorch:stats.h:90] Model Load Time: 12.876000 (seconds)
I 00:00:22.205063 executorch:stats.h:100] Total inference time: 9.323000 (seconds) Rate: 8.902714 (tokens/second)
I 00:00:22.205067 executorch:stats.h:108] Prompt evaluation: 1.549000 (seconds) Rate: 28.405423 (tokens/second)
I 00:00:22.205070 executorch:stats.h:119] Generated 83 tokens: 7.774000 (seconds) Rate: 10.676614 (tokens/second)
I 00:00:22.205073 executorch:stats.h:127] Time to first generated token: 1.549000 (seconds)
I 00:00:22.205075 executorch:stats.h:134] Sampling time over 127 tokens: 0.029000 (seconds)
[...]
```
**Similar results on AiBench OnePlus12:**
* Vanilla, AiBench Links: [gs=32](https://www.internalfb.com/intern/aibench/details/114258284562772), [gs=256](https://www.internalfb.com/intern/aibench/details/438103192423336)
```
# gs=32
I 00:00:21.792659 executorch:stats.h:84] Prompt Tokens: 5 Generated Tokens: 118
I 00:00:21.792721 executorch:stats.h:90] Model Load Time: 11.666000 (seconds)
I 00:00:21.792754 executorch:stats.h:100] Total inference time: 10.109000 (seconds) Rate: 11.672767 (tokens/second)
I 00:00:21.792778 executorch:stats.h:108] Prompt evaluation: 0.365000 (seconds) Rate: 13.698630 (tokens/second)
I 00:00:21.792799 executorch:stats.h:119] Generated 118 tokens: 9.744000 (seconds) Rate: 12.110016 (tokens/second)
I 00:00:21.792818 executorch:stats.h:127] Time to first generated token: 0.365000 (seconds)
I 00:00:21.792837 executorch:stats.h:134] Sampling time over 123 tokens: 0.008000 (seconds)
```
* Two-partition, AiBench Links: [gs=32](https://www.internalfb.com/intern/aibench/details/852029802754424), [gs=256](https://www.internalfb.com/intern/aibench/details/491722732991273)
```
# gs=32
I 00:00:22.584271 executorch:stats.h:84] Prompt Tokens: 5 Generated Tokens: 118
I 00:00:22.584336 executorch:stats.h:90] Model Load Time: 11.610000 (seconds)
I 00:00:22.584367 executorch:stats.h:100] Total inference time: 10.960000 (seconds) Rate: 10.766423 (tokens/second)
I 00:00:22.584389 executorch:stats.h:108] Prompt evaluation: 0.286000 (seconds) Rate: 17.482517 (tokens/second)
I 00:00:22.584409 executorch:stats.h:119] Generated 118 tokens: 10.674000 (seconds) Rate: 11.054900 (tokens/second)
I 00:00:22.584428 executorch:stats.h:127] Time to first generated token: 0.286000 (seconds)
I 00:00:22.584446 executorch:stats.h:134] Sampling time over 123 tokens: 0.013000 (seconds)
```
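The per-device deltas behind the headline "~20% prefill / ~7% decode" numbers can be checked directly from the rates in the logs above. A quick sketch (rates copied verbatim from the logs; the device labels are just keys for this snippet):

```python
# Prefill/decode rates in tokens/second, taken from the benchmark logs above.
# Each tuple is (vanilla, two_partition).
rates = {
    "S23":       {"prefill": (25.287356, 28.405423), "decode": (11.048988, 10.676614)},
    "OnePlus12": {"prefill": (13.698630, 17.482517), "decode": (12.110016, 11.054900)},
}

for device, phases in rates.items():
    for phase, (vanilla, two_part) in phases.items():
        # Percentage change of the two-partition PTE relative to vanilla.
        pct = (two_part / vanilla - 1) * 100
        print(f"{device} {phase}: {pct:+.1f}%")
```

This gives roughly +12% / +28% prefill and -3% / -9% decode on the two devices, consistent with the averaged figures quoted in the summary.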
bypass-github-export-checks
bypass-github-pytorch-ci-checks
bypass-github-executorch-ci-checks
Reviewed By: mcr229
Differential Revision: D63271101
fbshipit-source-id: 1680c8d97ec5ac15011eae8a6d75a003bd0354b4