# examples nits #820

Closed · wants to merge 1 commit
2 changes: 1 addition & 1 deletion backends/qualcomm/setup.md
@@ -7,7 +7,7 @@ on a Android device.

## Prerequisite

-Please finish tutorial [Setting up executorch](../../docs/website/docs/tutorials/00_setting_up_executorch.md).
+Please finish tutorial [Setting up executorch](../../docs/source/getting-started-setup.md).


## Conventions
2 changes: 1 addition & 1 deletion examples/arm/README.md
@@ -1,7 +1,7 @@
## ExecuTorch on ARM Cortex-M55 + Ethos-U55

This dir contains scripts to help you prepare setup needed to run a PyTorch
-model on a ARM Corstone-300 platform via ExecuTorch. Corstone-300 platform
+model on an ARM Corstone-300 platform via ExecuTorch. Corstone-300 platform
contains the Cortex-M55 CPU and Ethos-U55 NPU.

We will start from a PyTorch model in python, export it, convert it to a `.pte`
4 changes: 2 additions & 2 deletions examples/portable/README.md
@@ -16,11 +16,11 @@ examples/portable

## Using portable mode

-We will walk through an example model to generate a `.pte` file in [portable mode](/docs/website/docs/basics/terminology.md) from a python `torch.nn.module`
+We will walk through an example model to generate a `.pte` file in [portable mode](../../docs/source/concepts.md#portable-mode-lean-mode) from a python `torch.nn.module`
from the [`models/`](../models) directory using scripts in the `portable/scripts` directory. Then we will run on the `.pte` model on the ExecuTorch runtime. For that we will use `executor_runner`.


-1. Following the setup guide in [Setting up ExecuTorch from GitHub](/docs/website/docs/tutorials/00_setting_up_executorch.md)
+1. Following the setup guide in [Setting up ExecuTorch from GitHub](../../docs/source/getting-started-setup.md)
you should be able to get the basic development environment for ExecuTorch working.

2. Using the script `portable/scripts/export.py` generate a model binary file by selecting a
9 changes: 5 additions & 4 deletions examples/portable/custom_ops/README.md
@@ -3,12 +3,13 @@ This folder contains examples to register custom operators into PyTorch as well

## How to run

-Prerequisite: finish the [setting up wiki](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/00_setting_up_executorch.md).
+Prerequisite: finish the [setting up wiki](../../../docs/source/getting-started-setup.md).

Run:

```bash
-bash test_custom_ops.sh
+cd executorch
+bash examples/portable/custom_ops/test_custom_ops.sh [cmake|buck2]
```

## AOT registration
@@ -27,7 +28,7 @@ By linking them both with `libtorch` and `executorch` library, we can build a sh

## C++ kernel registration

-After the model is exported by EXIR, we need C++ implementations of these custom ops in order to run it. For example, `custom_ops_1_out.cpp` is C++ kernel that can be plugged in to ExecuTorch runtime. Other than that, we also need a way to bind the PyTorch op to this kernel. This binding is specified in `custom_ops.yaml`:
+After the model is exported by EXIR, we need C++ implementations of these custom ops in order to run it. For example, `custom_ops_1_out.cpp` is a C++ kernel that can be plugged into the ExecuTorch runtime. Other than that, we also need a way to bind the PyTorch op to this kernel. This binding is specified in `custom_ops.yaml`:
```yaml
- func: my_ops::mul3.out(Tensor input, *, Tensor(a!) output) -> Tensor(a!)
kernels:
@@ -57,4 +58,4 @@ et_operator_library(

We then let the custom ops library depend on this target, to only register the ops we want.
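For reference, a selective-op target of this shape might look like the following. This is a sketch based on the `et_operator_library(` call visible in the hunk above; the target name is illustrative, and the op string is taken from the `custom_ops.yaml` snippet earlier, not from this PR:

```python
# Hypothetical Buck2 (Starlark) target: register only the listed custom op.
# Target name is illustrative; the op string mirrors custom_ops.yaml above.
et_operator_library(
    name = "select_custom_ops",
    ops = [
        "my_ops::mul3.out",
    ],
)
```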

-For more information about selective build, please refer to [`docs/tutorials/selective_build.md`](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/selective_build.md).
+For more information about selective build, please refer to [`selective_build.md`](../../../docs/source/kernel-library-selective_build.md).
16 changes: 11 additions & 5 deletions examples/portable/custom_ops/test_custom_ops.sh
@@ -21,7 +21,7 @@ test_buck2_custom_op_1() {
# should save file custom_ops_1.pte

echo 'Running executor_runner'
-buck2 run //examples/portable/executor_runner:executor_runner \
+$BUCK2 run //examples/portable/executor_runner:executor_runner \
--config=executorch.register_custom_op=1 -- --model_path="./${model_name}.pte"
# should give correct result

@@ -37,7 +37,7 @@ test_cmake_custom_op_1() {
(rm -rf cmake-out \
&& mkdir cmake-out \
&& cd cmake-out \
-&& retry cmake -DBUCK2=buck2 \
+&& retry cmake -DBUCK2=$BUCK2 \
-DREGISTER_EXAMPLE_CUSTOM_OP=1 \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" ..)

@@ -52,13 +52,13 @@ test_buck2_custom_op_2() {
local model_name='custom_ops_2'

echo 'Building custom ops shared library'
-SO_LIB=$(buck2 build //examples/portable/custom_ops:custom_ops_aot_lib_2 --show-output | grep "buck-out" | cut -d" " -f2)
+SO_LIB=$($BUCK2 build //examples/portable/custom_ops:custom_ops_aot_lib_2 --show-output | grep "buck-out" | cut -d" " -f2)

echo "Exporting ${model_name}.pte"
${PYTHON_EXECUTABLE} -m "examples.portable.custom_ops.${model_name}" --so_library="$SO_LIB"
# should save file custom_ops_2.pte

-buck2 run //examples/portable/executor_runner:executor_runner \
+$BUCK2 run //examples/portable/executor_runner:executor_runner \
--config=executorch.register_custom_op=2 -- --model_path="./${model_name}.pte"
# should give correct result
echo "Removing ${model_name}.pte"
@@ -88,7 +88,7 @@ test_cmake_custom_op_2() {
(rm -rf cmake-out \
&& mkdir cmake-out \
&& cd cmake-out \
-&& retry cmake -DBUCK2=buck2 \
+&& retry cmake -DBUCK2=$BUCK2 \
-DREGISTER_EXAMPLE_CUSTOM_OP=2 \
-DCMAKE_PREFIX_PATH="$CMAKE_PREFIX_PATH" \
-DPYTHON_EXECUTABLE="$PYTHON_EXECUTABLE" ..)
@@ -109,6 +109,12 @@ if [[ -z $PYTHON_EXECUTABLE ]];
then
PYTHON_EXECUTABLE=python3
fi

+if [[ -z $BUCK2 ]];
+then
+BUCK2=buck2
+fi

if [[ $1 == "cmake" ]];
then
test_cmake_custom_op_1
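The `$BUCK2` edits in this script follow a common shell fallback idiom: use the environment's value if it is set, otherwise fall back to a default. A minimal sketch of the same pattern, using the one-line `${VAR:-default}` expansion that is equivalent to the `if [[ -z $BUCK2 ]] ... fi` block the PR adds:

```shell
#!/bin/bash
# Fallback idiom: take BUCK2 from the environment if set, else default to "buck2".
# Equivalent to the `if [[ -z $BUCK2 ]] ... fi` block added in the script above.
BUCK2="${BUCK2:-buck2}"
echo "$BUCK2"
```

Running the sketch with `BUCK2=/opt/bin/buck2` exported prints that path; with the variable unset it prints `buck2`.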
5 changes: 2 additions & 3 deletions examples/qualcomm/README.md
@@ -8,7 +8,7 @@ Here are some general information and limitations.

## Prerequisite

-Please finish tutorial [Setting up executorch](../../docs/website/docs/tutorials/00_setting_up_executorch.md).
+Please finish tutorial [Setting up executorch](../../docs/source/getting-started-setup.md).

Please finish [setup QNN backend](../../backends/qualcomm/setup.md).

@@ -57,8 +57,7 @@ but the performance and accuracy number can differ.
set of SoCs. Please check QNN documents for details.

3. The mobilebert example needs to train the last classifier layer a bit, so it takes
time to run.
-time to run.

4. [**Important**] Due to the numerical limits of FP16, other use cases leveraging mobileBert wouldn't
guarantee to work.

7 changes: 4 additions & 3 deletions examples/selective_build/README.md
@@ -1,14 +1,15 @@
# Selective Build Examples
-To optimize binary size of ExecuTorch runtime, selective build can be used. This folder contains examples to select only the operators needed for ExecuTorch build. We provide APIs for both CMake build and buck2 build. This example will demonstrate both. You can find more information on how to use buck2 macros in [wiki](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/selective_build.md).
+To optimize binary size of ExecuTorch runtime, selective build can be used. This folder contains examples to select only the operators needed for ExecuTorch build. We provide APIs for both CMake build and buck2 build. This example will demonstrate both. You can find more information on how to use buck2 macros in [wiki](../../docs/source/kernel-library-selective_build.md).

## How to run

-Prerequisite: finish the [setting up wiki](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/00_setting_up_executorch.md).
+Prerequisite: finish the [setting up wiki](../../docs/source/getting-started-setup.md).

Run:

```bash
-bash test_selective_build.sh [cmake|buck2]
+cd executorch
+bash examples/selective_build/test_selective_build.sh [cmake|buck2]
```
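The `[cmake|buck2]` argument selects which build flow the test script exercises. A hypothetical sketch of that dispatch pattern, under the assumption that the script branches on `$1`; the real script calls its build and test functions where this sketch only echoes:

```shell
#!/bin/bash
# Illustrative [cmake|buck2] dispatch; the real test script invokes its
# build/test functions where this sketch prints a placeholder message.
mode="${1:-buck2}"
case "$mode" in
  cmake) echo "running CMake flow" ;;
  buck2) echo "running Buck2 flow" ;;
  *) echo "unknown mode: $mode" >&2; exit 1 ;;
esac
```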

## BUCK2 examples