`extension/benchmark/README.md`
- **Device Support**: Includes popular phones such as the latest Apple iPhone, Google Pixel, and Samsung Galaxy.

- **Backend Delegates**: Supports XNNPACK, Apple CoreML and MPS, Qualcomm QNN, and more in the near future.

- **Benchmark Apps:** Generic apps that support both GenAI and non-GenAI models, capable of measuring performance offline. [Android App](android/benchmark/) | [iOS App](apple/Benchmark/). Popular Android and iOS profilers with in-depth performance analysis will be integrated with these apps in the future.
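To illustrate the two core measurements the benchmark apps report (model load time and average inference latency), here is a minimal, hypothetical timing harness. It is a sketch in our own words, not code from the apps; the `load_fn`/`infer_fn` callables are illustrative stand-ins for a real model.

```python
import time

def benchmark(load_fn, infer_fn, warmup=2, iterations=10):
    """Measure model load time and average inference latency, both in ms.

    load_fn and infer_fn are illustrative stand-ins for real model
    loading and inference; this is not the benchmark apps' actual code.
    """
    t0 = time.perf_counter()
    model = load_fn()
    load_ms = (time.perf_counter() - t0) * 1000.0

    # Warm-up runs are excluded from the measurement.
    for _ in range(warmup):
        infer_fn(model)

    t0 = time.perf_counter()
    for _ in range(iterations):
        infer_fn(model)
    avg_latency_ms = (time.perf_counter() - t0) * 1000.0 / iterations
    return load_ms, avg_latency_ms

# Dummy model and workload, purely for demonstration.
load_ms, latency_ms = benchmark(lambda: object(), lambda m: sum(range(1000)))
```

Averaging over several timed iterations (after warm-up) is what makes the reported latency stable across runs.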
> **Disclaimer:** The infrastructure is new and experimental. We're working on improving its accessibility and stability over time.
## Preliminary Benchmark Results

Below is a table summarizing some example data points obtained via the infra. These numbers represent model load time and average inference latency across different platforms and backends.

| Model | Backend | Model Load Time (ms) | Avg Inference Latency (ms) | Device |
| --- | --- | --- | --- | --- |
| MobileBERT (mobilebert) | XNNPACK FP32 | [26.499](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999930558) | [33.978](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999930558) | Apple iPhone 15 Pro |
| MobileBERT (mobilebert) | COREML FP16 | [206.202](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999930398) | [1.873](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999930398) | Apple iPhone 15 Pro |
| EDSR (edsr) | XNNPACK Q8 | [3.190](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999929836) | [168.429](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999929836) | Apple iPhone 15 Pro |
| EDSR (edsr) | COREML FP16 | [156.075](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999929690) | [77.346](https://github.com/pytorch/executorch/actions/runs/11136241814/job/30999929690) | Apple iPhone 15 Pro |
## Dashboard

The ExecuTorch Benchmark Dashboard tracks performance metrics for various models across different backend delegates and devices. It enables users to compare metrics, monitor trends, and identify optimizations or regressions in ExecuTorch. The dashboard is accessible at **[ExecuTorch Benchmark Dashboard](https://hud.pytorch.org/benchmark/llms?repoName=pytorch%2Fexecutorch)**.
**Comprehensive Comparisons**:

- Analyze performance differences between backend delegates (e.g., XNNPACK, CoreML, QNN, MPS) for the same model.
- Compare performance across different models.
- Track performance changes over time and across different commits.
**Metric Tracking**:

- Monitor essential metrics such as load time and inference time for different model implementations.
- For LLMs, additional metrics like tokens/s are available.
- Observe performance trends over time to identify improvements or regressions.
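The tokens/s metric mentioned above is a simple throughput ratio: tokens generated divided by wall-clock generation time. A minimal sketch (the function name and signature are ours, for illustration only):

```python
def tokens_per_second(num_generated_tokens: int, generation_time_s: float) -> float:
    """LLM throughput: generated tokens divided by wall-clock seconds.

    Illustrative helper, not part of the benchmark infrastructure.
    """
    if generation_time_s <= 0:
        raise ValueError("generation time must be positive")
    return num_generated_tokens / generation_time_s

# e.g. 128 tokens generated in 4.0 s gives 32.0 tokens/s
rate = tokens_per_second(128, 4.0)
```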
**Visualizations**:

- View detailed performance data through charts and graphs.
- Color-coded highlights for improvements (green) and regressions (red) exceeding 5% compared to the baseline.
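The 5% highlighting rule above amounts to a threshold on relative change versus the baseline. Here is a hedged sketch of how such a classifier could work; this is our illustration, not the dashboard's actual code, and it assumes a metric where lower is better (e.g. latency):

```python
def classify(baseline: float, current: float, threshold: float = 0.05) -> str:
    """Label a metric change relative to baseline (lower is better).

    Changes beyond +/- threshold (5% by default) are flagged,
    mirroring the dashboard's red/green highlighting.
    """
    change = (current - baseline) / baseline
    if change > threshold:
        return "regression"    # shown in red
    if change < -threshold:
        return "improvement"   # shown in green
    return "neutral"

classify(100.0, 110.0)  # 10% slower than baseline -> "regression"
```

A fixed relative threshold like this filters out run-to-run noise so that only meaningful shifts are surfaced.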
The dashboard is refreshed nightly with new models, metrics, and features to provide the most current and relevant information for ExecuTorch performance benchmarking across various model types and configurations.