
Commit 7d72c5a

Clean up additional doc placeholders and fix broken links

1 parent e450f84 · commit 7d72c5a
17 files changed: +67, -42 lines

docs/source/backends-arm-ethos-u.md
Lines changed: 3 additions & 3 deletions

@@ -7,8 +7,8 @@
 :::{grid-item-card} Tutorials we recommend you complete before this:
 :class-card: card-prerequisites
 * [Introduction to ExecuTorch](./intro-how-it-works.md)
-* [Setting up ExecuTorch](./getting-started-setup.md)
-* [Building ExecuTorch with CMake](./runtime-build-and-cross-compilation.md)
+* [Getting Started](./getting-started.md)
+* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
 :::

 :::{grid-item-card} What you will learn in this tutorial:

@@ -280,7 +280,7 @@ The `generate_pte_file` function in `run.sh` script produces the `.pte` files ba

 ExecuTorch's CMake build system produces a set of build pieces which are critical for us to include and run the ExecuTorch runtime with-in the bare-metal environment we have for Corstone FVPs from Ethos-U SDK.

-[This](./runtime-build-and-cross-compilation.md) document provides a detailed overview of each individual build piece. For running either variant of the `.pte` file, we will need a core set of libraries. Here is a list,
+[This](./using-executorch-building-from-source.md) document provides a detailed overview of each individual build piece. For running either variant of the `.pte` file, we will need a core set of libraries. Here is a list,

 - `libexecutorch.a`
 - `libportable_kernels.a`

docs/source/backends-cadence.md
Lines changed: 3 additions & 3 deletions

@@ -17,9 +17,9 @@ On top of being able to run on the Xtensa HiFi4 DSP, another goal of this tutori
 :::
 :::{grid-item-card} Tutorials we recommend you complete before this:
 :class-card: card-prerequisites
-* [Introduction to ExecuTorch](intro-how-it-works.md)
-* [Setting up ExecuTorch](getting-started-setup.md)
-* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
+* [Introduction to ExecuTorch](./intro-how-it-works.md)
+* [Getting Started](./getting-started.md)
+* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
 :::
 ::::

docs/source/backends-coreml.md
Lines changed: 3 additions & 3 deletions

@@ -11,9 +11,9 @@ Core ML delegate uses Core ML APIs to enable running neural networks via Apple's
 :::
 :::{grid-item-card} Tutorials we recommend you complete before this:
 :class-card: card-prerequisites
-* [Introduction to ExecuTorch](intro-how-it-works.md)
-* [Setting up ExecuTorch](getting-started-setup.md)
-* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
+* [Introduction to ExecuTorch](./intro-how-it-works.md)
+* [Getting Started](./getting-started.md)
+* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
 * [ExecuTorch iOS Demo App](demo-apps-ios.md)
 :::
 ::::

docs/source/backends-mediatek.md
Lines changed: 4 additions & 4 deletions

@@ -11,9 +11,9 @@ MediaTek backend empowers ExecuTorch to speed up PyTorch models on edge devices
 :::
 :::{grid-item-card} Tutorials we recommend you complete before this:
 :class-card: card-prerequisites
-* [Introduction to ExecuTorch](intro-how-it-works.md)
-* [Setting up ExecuTorch](getting-started-setup.md)
-* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
+* [Introduction to ExecuTorch](./intro-how-it-works.md)
+* [Getting Started](./getting-started.md)
+* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
 :::
 ::::

@@ -91,4 +91,4 @@ cd executorch

 ```bash
 export LD_LIBRARY_PATH=<path_to_usdk>:<path_to_neuron_backend>:$LD_LIBRARY_PATH
-```
\ No newline at end of file
+```

docs/source/backends-mps.md
Lines changed: 3 additions & 3 deletions

@@ -12,9 +12,9 @@ The MPS backend device maps machine learning computational graphs and primitives
 :::
 :::{grid-item-card} Tutorials we recommend you complete before this:
 :class-card: card-prerequisites
-* [Introduction to ExecuTorch](intro-how-it-works.md)
-* [Setting up ExecuTorch](getting-started-setup.md)
-* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
+* [Introduction to ExecuTorch](./intro-how-it-works.md)
+* [Getting Started](./getting-started.md)
+* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
 * [ExecuTorch iOS Demo App](demo-apps-ios.md)
 * [ExecuTorch iOS LLaMA Demo App](llm/llama-demo-ios.md)
 :::

docs/source/backends-overview.md
Lines changed: 2 additions & 2 deletions

@@ -8,7 +8,7 @@ As part of the .pte file creation process, ExecuTorch identifies portions of the

 ### Available Backends

-Commonly used hardware backends are listed below. For mobile, consider using XNNPACK for Android and XNNPACK or Core ML for iOS. To create a .pte file for a specific backend, pass the appropriate partitioner class to `to_edge_transform_and_lower`. See the appropriate backend documentation and the [Export and Lowering](#export-and-lowering) section below for more information.
+Commonly used hardware backends are listed below. For mobile, consider using XNNPACK for Android and XNNPACK or Core ML for iOS. To create a .pte file for a specific backend, pass the appropriate partitioner class to `to_edge_transform_and_lower`. See the appropriate backend documentation for more information.

 - [XNNPACK (Mobile CPU)](backends-xnnpack.md)
 - [Core ML (iOS)](backends-coreml.md)

@@ -17,4 +17,4 @@ Commonly used hardware backends are listed below. For mobile, consider using XNN
 - [Qualcomm NPU](backends-qualcomm.md)
 - [MediaTek NPU](backends-mediatek.md)
 - [Arm Ethos-U NPU](backends-arm-ethos-u.md)
-- [Cadence DSP](backends-cadence.md)
\ No newline at end of file
+- [Cadence DSP](backends-cadence.md)
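As context for this hunk: the partitioner pattern the edited paragraph refers to can be sketched in Python roughly as follows. This is a minimal illustration, not part of the commit; the tiny model is a placeholder, and the `XnnpackPartitioner` import path is the one used in ExecuTorch's examples.

```python
# Minimal sketch: the backend is selected by the partitioner passed to
# to_edge_transform_and_lower. Swapping in another backend's partitioner
# class retargets the same model to Core ML, Vulkan, Qualcomm, etc.
import torch
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class TinyModel(torch.nn.Module):  # placeholder model for illustration
    def forward(self, x):
        return torch.nn.functional.relu(x)


exported = torch.export.export(TinyModel().eval(), (torch.randn(1, 8),))

# The partitioner list decides which backend the delegated portions target.
et_program = to_edge_transform_and_lower(
    exported,
    partitioner=[XnnpackPartitioner()],
).to_executorch()
```

Each backend page linked in the hunk documents its own partitioner class and any backend-specific export options.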

docs/source/backends-qualcomm.md
Lines changed: 3 additions & 3 deletions

@@ -14,9 +14,9 @@ Qualcomm AI Engine Direct is also referred to as QNN in the source and documenta
 :::
 :::{grid-item-card} Tutorials we recommend you complete before this:
 :class-card: card-prerequisites
-* [Introduction to ExecuTorch](intro-how-it-works.md)
-* [Setting up ExecuTorch](getting-started-setup.md)
-* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
+* [Introduction to ExecuTorch](./intro-how-it-works.md)
+* [Getting Started](./getting-started.md)
+* [Building ExecuTorch with CMake](./using-executorch-building-from-source.md)
 :::
 ::::

docs/source/getting-started.md
Lines changed: 9 additions & 7 deletions

@@ -29,7 +29,9 @@ The following are required to install the ExecuTorch host libraries, needed to e
 <hr/>

 ## Preparing the Model
-Exporting is the process of taking a PyTorch model and converting it to the .pte file format used by the ExecuTorch runtime. This is done using Python APIs. PTE files for common models, such as Llama 3, can be found on HuggingFace under [ExecuTorch Community](https://huggingface.co/executorch-community). These models have been exported and lowered for ExecuTorch, and can be directly deployed without needing to go through the lowering process.
+Exporting is the process of taking a PyTorch model and converting it to the .pte file format used by the ExecuTorch runtime. This is done using Python APIs. PTE files for common models, such as Llama 3.2, can be found on HuggingFace under [ExecuTorch Community](https://huggingface.co/executorch-community). These models have been exported and lowered for ExecuTorch, and can be directly deployed without needing to go through the lowering process.
+
+A complete example of exporting, lowering, and verifying MobileNet V2 is available as a [Colab notebook](https://colab.research.google.com/drive/1qpxrXC3YdJQzly3mRg-4ayYiOjC6rue3?usp=sharing).

 ### Requirements
 - A PyTorch model.

@@ -39,7 +41,7 @@ Exporting is the process of taking a PyTorch model and converting it to the .pte
 ### Selecting a Backend
 ExecuTorch provides hardware acceleration for a wide variety of hardware. The most commonly used backends are XNNPACK, for Arm and x86 CPU, Core ML (for iOS), Vulkan (for Android GPUs), and Qualcomm (for Qualcomm-powered Android phones).

-For mobile use cases, consider using XNNPACK for Android and Core ML or XNNPACK for iOS as a first step. See [Hardware Backends](using-executorch-export.md#hardware-backends) for more information.
+For mobile use cases, consider using XNNPACK for Android and Core ML or XNNPACK for iOS as a first step. See [Hardware Backends](backends-overview.md) for more information.

 ### Exporting
 Exporting is done using Python APIs. ExecuTorch provides a high degree of customization during the export process, but the typical flow is as follows:

@@ -50,13 +52,13 @@ model = MyModel() # The PyTorch model to export
 example_inputs = (torch.randn(1,3,64,64),) # A tuple of inputs

 et_program =
-executorch.exir.to_edge_transform_and_lower(
-torch.export.export(model, example_inputs)
+    executorch.exir.to_edge_transform_and_lower(
+        torch.export.export(model, example_inputs)
         partitioner=[XnnpackPartitioner()]
     ).to_executorch()

 with open(“model.pte”, “wb”) as f:
-f.write(et_program.buffer)
+    f.write(et_program.buffer)
 ```

 If the model requires varying input sizes, you will need to specify the varying dimensions and bounds as part of the `export` call. See [Model Export and Lowering](using-executorch-export.md) for more information.

@@ -119,7 +121,7 @@ import org.pytorch.executorch.Tensor;

 //

-Module model = Module.load(“/path/to/model.pte”)
+Module model = Module.load(“/path/to/model.pte”);

 Tensor input_tensor = Tensor.fromBlob(float_data, new long[] { 1, 3, height, width });
 EValue input_evalue = EValue.from(input_tensor);

@@ -204,4 +206,4 @@ ExecuTorch provides a high-degree of customizability to support diverse hardware
 - [Using ExecuTorch with C++](using-executorch-cpp.md) for embedded and mobile native development.
 - [Profiling and Debugging](using-executorch-troubleshooting.md) for developer tooling and debugging.
 - [API Reference](export-to-executorch-api-reference.md) for a full description of available APIs.
-- [Examples](https://github.com/pytorch/executorch/tree/main/examples) for demo apps and example code.
\ No newline at end of file
+- [Examples](https://github.com/pytorch/executorch/tree/main/examples) for demo apps and example code.
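As a companion to the snippet this file's third hunk re-indents, here is one runnable version of the export flow, extended to cover the varying-input-size case mentioned in the trailing context line. It is a sketch under stated assumptions: the model class and batch-size bounds are illustrative, and the `XnnpackPartitioner` import path follows ExecuTorch's examples.

```python
import torch
from torch.export import Dim, export

from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner
from executorch.exir import to_edge_transform_and_lower


class MyModel(torch.nn.Module):  # stand-in for the model being exported
    def forward(self, x):
        return torch.nn.functional.relu(x)


model = MyModel().eval()
# Batch of 2 so the batch dim can be marked dynamic
# (size-1 dimensions get specialized by torch.export).
example_inputs = (torch.randn(2, 3, 64, 64),)

# Illustrative bounds: allow batch sizes 1 through 8 at runtime.
batch = Dim("batch", min=1, max=8)

et_program = to_edge_transform_and_lower(
    export(model, example_inputs, dynamic_shapes={"x": {0: batch}}),
    partitioner=[XnnpackPartitioner()],
).to_executorch()

with open("model.pte", "wb") as f:  # note: plain ASCII quotes
    f.write(et_program.buffer)
```

If the model takes fixed-size inputs only, the `dynamic_shapes` argument can simply be omitted.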

docs/source/index.rst
Lines changed: 4 additions & 1 deletion

@@ -91,8 +91,8 @@ Topics in this section will help you get started with ExecuTorch.
    using-executorch-cpp
    using-executorch-runtime-integration
    using-executorch-troubleshooting
-   using-executorch-faqs
    using-executorch-building-from-source
+   using-executorch-faqs

 .. toctree::
    :glob:

@@ -140,6 +140,9 @@ Topics in this section will help you get started with ExecuTorch.
    :hidden:

    runtime-overview
+   extension-module
+   extension-tensor
+   running-a-model-cpp-tutorial
    runtime-backend-delegate-implementation-and-linking
    runtime-platform-abstraction-layer
    portable-cpp-programming

docs/source/llm/build-run-llama3-qualcomm-ai-engine-direct-backend.md
Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@ This tutorial demonstrates how to export Llama 3 8B Instruct for Qualcomm AI Eng
 ## Prerequisites

 - Set up your ExecuTorch repo and environment if you haven’t done so by following [the Setting up ExecuTorch](../getting-started-setup.md) to set up the repo and dev environment.
-- Read [the Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend page](../build-run-qualcomm-ai-engine-direct-backend.md) to understand how to export and run a model with Qualcomm AI Engine Direct Backend on Qualcomm device.
+- Read [the Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend page](../backends-qualcomm.md) to understand how to export and run a model with Qualcomm AI Engine Direct Backend on Qualcomm device.
 - Follow [the README for executorch llama](https://github.com/pytorch/executorch/tree/main/examples/models/llama) to know how to run a llama model on mobile via ExecuTorch.
 - A Qualcomm device with 16GB RAM
   - We are continuing to optimize our memory usage to ensure compatibility with lower memory devices.

docs/source/llm/getting-started.md
Lines changed: 2 additions & 2 deletions

@@ -592,8 +592,8 @@ I'm not sure if you've heard of the "Curse of the Dragon" or not, but it's a ver
 The delegated model should be noticeably faster compared to the non-delegated model.

 For more information regarding backend delegateion, see the ExecuTorch guides
-for the [XNNPACK Backend](../tutorial-xnnpack-delegate-lowering.md), [Core ML
-Backend](../build-run-coreml.md) and [Qualcomm AI Engine Direct Backend](build-run-llama3-qualcomm-ai-engine-direct-backend.md).
+for the [XNNPACK Backend](../backends-xnnpack.md), [Core ML
+Backend](../backends-coreml.md) and [Qualcomm AI Engine Direct Backend](build-run-llama3-qualcomm-ai-engine-direct-backend.md).

 ## Quantization
docs/source/runtime-overview.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -157,7 +157,7 @@ For more details about the ExecuTorch runtime, please see:
157157

158158
* [Detailed Runtime APIs Tutorial](running-a-model-cpp-tutorial.md)
159159
* [Simplified Runtime APIs Tutorial](extension-module.md)
160-
* [Runtime Build and Cross Compilation](runtime-build-and-cross-compilation.md)
160+
* [Building from Source](using-executorch-building-from-source.md)
161161
* [Runtime Platform Abstraction Layer](runtime-platform-abstraction-layer.md)
162162
* [Runtime Profiling](runtime-profiling.md)
163163
* [Backends and Delegates](compiler-delegate-and-partitioner.md)

docs/source/tutorial-xnnpack-delegate-lowering.md
Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ In this tutorial, you will learn how to export an XNNPACK lowered Model and run
 :class-card: card-prerequisites
 * [Setting up ExecuTorch](./getting-started-setup.md)
 * [Model Lowering Tutorial](./tutorials/export-to-executorch-tutorial)
-* [ExecuTorch XNNPACK Delegate](./native-delegates-executorch-xnnpack-delegate.md)
+* [ExecuTorch XNNPACK Delegate](./backends-xnnpack.md)
 :::
 ::::

docs/source/using-executorch-building-from-source.md
Lines changed: 3 additions & 3 deletions

@@ -159,7 +159,7 @@ xcode-select --install
 ```

 Run the above command with `--help` flag to learn more on how to build additional backends
-(like [Core ML](build-run-coreml.md), [MPS](build-run-mps.md) or XNNPACK), etc.
+(like [Core ML](backends-coreml.md), [MPS](backends-mps.md) or XNNPACK), etc.
 Note, some backends may require additional dependencies and certain versions of Xcode and iOS.

 3. Copy over the generated `.xcframework` bundles to your Xcode project, link them against

@@ -172,6 +172,6 @@ Check out the [iOS Demo App](demo-apps-ios.md) tutorial for more info.

 You have successfully cross-compiled `executor_runner` binary to iOS and Android platforms. You can start exploring advanced features and capabilities. Here is a list of sections you might want to read next:

-* [Selective build](./kernel-library-selective_build) to build the runtime that links to only kernels used by the program, which can provide significant binary size savings.
+* [Selective build](kernel-library-selective-build.md) to build the runtime that links to only kernels used by the program, which can provide significant binary size savings.
 * Tutorials on building [Android](./demo-apps-android.md) and [iOS](./demo-apps-ios.md) demo apps.
-* Tutorials on deploying applications to embedded devices such as [ARM Cortex-M/Ethos-U](./executorch-arm-delegate-tutorial.md) and [XTensa HiFi DSP](./build-run-xtensa.md).
+* Tutorials on deploying applications to embedded devices such as [ARM Cortex-M/Ethos-U](backends-arm-ethos-u.md) and [XTensa HiFi DSP](./backends-cadence.md).

docs/source/using-executorch-cpp.md
Lines changed: 21 additions & 1 deletion

@@ -36,9 +36,29 @@ For more information on the Module class, see [Running an ExecuTorch Model Using

 Running a model using the low-level runtime APIs allows for a high-degree of control over memory allocation, placement, and loading. This allows for advanced use cases, such as placing allocations in specific memory banks or loading a model without a file system. For an end to end example using the low-level runtime APIs, see [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md).

+## Building with C++
+
+ExecuTorch uses CMake as the primary build system. Inclusion of the module and tensor APIs are controlled by the `EXECUTORCH_BUILD_EXTENSION_MODULE` and `EXECUTORCH_BUILD_EXTENSION_TENSOR` CMake options. As these APIs may not be supported on embedded systems, they are disabled by default when building from source. The low-level API surface is always included. To link, add the `executorch` target as a CMake dependency, along with `executorch_module_static` and `executorch_tensor`, if desired.
+
+```
+# CMakeLists.txt
+add_subdirectory("executorch")
+...
+target_link_libraries(
+    my_target
+    PRIVATE executorch
+            executorch_module_static
+            executorch_tensor
+            optimized_native_cpu_ops_lib
+            xnnpack_backend)
+```
+
+See [Building from Source](using-executorch-building-from-source.md) for more information on the CMake build process.
+
 ## Next Steps

 - [Runtime API Reference](executorch-runtime-api-reference.md) for documentation on the available C++ runtime APIs.
 - [Running an ExecuTorch Model Using the Module Extension in C++](extension-module.md) for information on the high-level Module API.
 - [Managing Tensor Memory in C++](extension-tensor.md) for information on high-level tensor APIs.
-- [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md) for information on the low-level runtime APIs.
\ No newline at end of file
+- [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md) for information on the low-level runtime APIs.
+- [Building from Source](using-executorch-building-from-source.md) for information on CMake build integration.

docs/source/using-executorch-runtime-integration.md
Lines changed: 2 additions & 2 deletions

@@ -10,7 +10,7 @@ Logging is sent to STDOUT and STDERR by default on host platforms, and is redire

 To configure log level when building from source, specify `EXECUTORCH_ENABLE_LOGGING` as on or off and `EXECUTORCH_LOG_LEVEL` as one of debug, info, error, or fatal. Logging is enabled by default in debug builds and disabled in release. Log level defaults to info.

-See [Building from Source](TODO) for more information.
+See [Building from Source](using-executorch-building-from-source.md) for more information.

 ```
 cmake -b cmake-out -DEXECUTORCH_ENABLE_LOGGING=ON -DEXECUTORCH_LOG_LEVEL=DEBUG ...

@@ -50,4 +50,4 @@ The choice of kernel library is transparent to the user when using mobile pre-bu

 By default, ExecuTorch ships with all supported operator kernels, allowing it to run any supported model at any precision. This comes with a binary size of several megabytes, which may be undesirable for production use cases or resource constrained systems. To minimize binary size, ExecuTorch provides selective build functionality, in order to include only the operators needed to run specific models.

-Note the selective build only applies to the portable and optimized kernel libraries. Delegates do not participate in selective build and can be included or excluded by linking indivually. See [Kernel Library Selective Build](kernel-library-selective-build.md) for more information.
\ No newline at end of file
+Note the selective build only applies to the portable and optimized kernel libraries. Delegates do not participate in selective build and can be included or excluded by linking indivually. See [Kernel Library Selective Build](kernel-library-selective-build.md) for more information.

docs/source/using-executorch-troubleshooting.md
Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 # Profiling and Debugging

-ExecuTorch
+To faciliate model and runtime integration, ExecuTorch provides tools to profile model resource utilization, numerics, and more. This section describes the available troubleshooting tools and steps to resolve issues when integrating ExecuTorch.

 ## General Troubleshooting Steps

@@ -17,4 +17,4 @@ The ExecuTorch developer tools, or devtools, are a collection of tooling for tro
 - [Frequently Asked Questions](using-executorch-faqs.md) for solutions to commonly encountered questions and issues.
 - [Introduction to the ExecuTorch Developer Tools](runtime-profiling.md) for a high-level introduction to available developer tooling.
 - [Using the ExecuTorch Developer Tools to Profile a Model](tutorials/devtools-integration-tutorial.md) for information on runtime performance profiling.
-- [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs.
\ No newline at end of file
+- [Inspector APIs](runtime-profiling.md) for reference material on trace inspector APIs.
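For readers new to the devtools this hunk links to, the profiling workflow the new paragraph describes might look roughly like the following sketch. The file paths are placeholders, and it assumes a runtime built with devtools support that has emitted an ETDump trace alongside an ETRecord from export.

```python
# Hedged sketch: inspecting a runtime trace with the ExecuTorch devtools.
# "etdump.etdp" and "etrecord.bin" are assumed artifact names, not defaults.
from executorch.devtools import Inspector

inspector = Inspector(
    etdump_path="etdump.etdp",   # runtime trace from a devtools-enabled build
    etrecord="etrecord.bin",     # export-time record linking trace to model
)
inspector.print_data_tabular()  # per-operator timing and metadata table
```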
