Commit 60cc6bc

New URL for the Profiling page (#5966)
New URL for the Profiling page (#5819)

Summary: Pull Request resolved: #5819

This diff renames the "sdk-profiling" documentation page to just "profiling".

Old URL: https://pytorch.org/executorch/main/sdk-profiling.html
New URL: https://pytorch.org/executorch/main/profiling.html

Design doc: https://docs.google.com/document/d/1l6DYTq9Kq6VrPohruRFP-qScZDj01W_g4zlKyvqKGF4/edit?usp=sharing

Reviewed By: dbort
Differential Revision: D63771297
fbshipit-source-id: 452fd105d9beca35242a2d60a9869b4ebbc54df1
(cherry picked from commit 79b7896)
Co-authored-by: Olivia Liu <[email protected]>
1 parent 9619b9c commit 60cc6bc

File tree: 4 files changed (+27, -24 lines)

docs/source/index.rst (1 addition, 1 deletion)

@@ -204,7 +204,7 @@ Topics in this section will help you get started with ExecuTorch.
   bundled-io
   etrecord
   etdump
-  sdk-profiling
+  runtime-profiling
   model-debugging
   model-inspector
   memory-planning-inspection
docs/source/runtime-overview.md (2 additions, 2 deletions)

@@ -33,7 +33,7 @@ The runtime is also responsible for:
   semantics of those operators.
 * Dispatching predetermined sections of the model to [backend
   delegates](compiler-delegate-and-partitioner.md) for acceleration.
-* Optionally gathering [profiling data](sdk-profiling.md) during load and
+* Optionally gathering [profiling data](runtime-profiling.md) during load and
   execution.

 ## Design Goals
@@ -159,7 +159,7 @@ For more details about the ExecuTorch runtime, please see:
 * [Simplified Runtime APIs Tutorial](extension-module.md)
 * [Runtime Build and Cross Compilation](runtime-build-and-cross-compilation.md)
 * [Runtime Platform Abstraction Layer](runtime-platform-abstraction-layer.md)
-* [Runtime Profiling](sdk-profiling.md)
+* [Runtime Profiling](runtime-profiling.md)
 * [Backends and Delegates](compiler-delegate-and-partitioner.md)
 * [Backend Delegate Implementation](runtime-backend-delegate-implementation-and-linking.md)
 * [Kernel Library Overview](kernel-library-overview.md)

docs/source/runtime-profiling.md (23 additions, 0 deletions)

@@ -0,0 +1,23 @@
+# Profiling Models in ExecuTorch
+
+Profiling in ExecuTorch gives users access to these runtime metrics:
+- Model Load Time.
+- Operator Level Execution Time.
+- Delegate Execution Time.
+  - If the delegate that the user is calling into has been integrated with the [Developer Tools](./delegate-debugging.md), then users will also be able to access delegated operator execution time.
+- End-to-end Inference Execution Time.
+
+One unique aspect of ExecuTorch profiling is the ability to link every operator executed at runtime back to the exact line of Python code from which it originated. This capability enables users to easily identify hotspots in their model, trace them back to the exact line of Python code, and optimize them if they choose to.
+
+We provide access to all the profiling data via the Python [Inspector API](./model-inspector.rst). The data mentioned above can be accessed through these interfaces, allowing users to perform any post-run analysis of their choice.
+
+## Steps to Profile a Model in ExecuTorch
+
+1. [Optional] Generate an [ETRecord](./etrecord.rst) while exporting your model. If provided, this enables users to link profiling details back to the eager model source code (with stack traces and module hierarchy).
+2. Build the runtime with the pre-processor flags that enable profiling, as detailed in the [ETDump documentation](./etdump.md).
+3. Run your Program on the ExecuTorch runtime and generate an [ETDump](./etdump.md).
+4. Create an instance of the [Inspector API](./model-inspector.rst) by passing in the ETDump sourced from the runtime, along with the ETRecord optionally generated in step 1.
+   - Through the Inspector API, users can perform a wide range of analyses, from printing out performance details to finer-grained calculations at the module level.
+
+Please refer to the [Developer Tools tutorial](./tutorials/devtools-integration-tutorial.rst) for a step-by-step walkthrough of the above process on a sample model.
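Steps 3 and 4 of the added page can be sketched in Python. This is a minimal, hypothetical example: the `Inspector` import path and keyword arguments follow the Inspector docs linked by this page and may differ between releases, and the file names are placeholders, not paths from this commit.

```python
# Hypothetical sketch of post-run analysis with the ExecuTorch Inspector API.
# Assumes a runtime built with profiling enabled has already written an ETDump,
# and (optionally) that an ETRecord was generated at export time.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="etdump.etdp",   # placeholder: ETDump produced by the runtime (step 3)
    etrecord="etrecord.bin",     # placeholder, optional: links events back to source (step 1)
)

# Print a tabular summary of per-event performance data; the same data can be
# pulled programmatically for custom module-level analysis.
inspector.print_data_tabular()
```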

docs/source/sdk-profiling.md (1 addition, 21 deletions)

@@ -1,23 +1,3 @@
 # Profiling Models in ExecuTorch

-Profiling in ExecuTorch gives users access to these runtime metrics:
-- Model Load Time.
-- Operator Level Execution Time.
-- Delegate Execution Time.
-  - If the delegate that the user is calling into has been integrated with the [Developer Tools](./delegate-debugging.md), then users will also be able to access delegated operator execution time.
-- End-to-end Inference Execution Time.
-
-One uniqe aspect of ExecuTorch Profiling is the ability to link every runtime executed operator back to the exact line of python code from which this operator originated. This capability enables users to easily identify hotspots in their model, source them back to the exact line of Python code, and optimize if chosen to.
-
-We provide access to all the profiling data via the Python [Inspector API](./model-inspector.rst). The data mentioned above can be accessed through these interfaces, allowing users to perform any post-run analysis of their choice.
-
-## Steps to Profile a Model in ExecuTorch
-
-1. [Optional] Generate an [ETRecord](./etrecord.rst) while you're exporting your model. If provided this will enable users to link back profiling details to eager model source code (with stack traces and module hierarchy).
-2. Build the runtime with the pre-processor flags that enable profiling. Detailed in the [ETDump documentation](./etdump.md).
-3. Run your Program on the ExecuTorch runtime and generate an [ETDump](./etdump.md).
-4. Create an instance of the [Inspector API](./model-inspector.rst) by passing in the ETDump you have sourced from the runtime along with the optionally generated ETRecord from step 1.
-   - Through the Inspector API, users can do a wide range of analysis varying from printing out performance details to doing more finer granular calculation on module level.
-
-Please refer to the [Developer Tools tutorial](./tutorials/devtools-integration-tutorial.rst) for a step-by-step walkthrough of the above process on a sample model.
+Please update your link to <https://pytorch.org/executorch/main/runtime-profiling.html>. This URL will be deleted after v0.4.0.
