[ExecuTorch]: Replace Executorch with ExecuTorch, Part 6/N #471

Closed · wants to merge 1 commit
4 changes: 2 additions & 2 deletions .ci/docker/README.md
@@ -1,7 +1,7 @@
-# Docker images for Executorch CI
+# Docker images for ExecuTorch CI

This directory contains everything needed to build the Docker images
-that are used in Executorch CI. The content of this directory are copied
+that are used in ExecuTorch CI. The content of this directory are copied
from PyTorch CI https://github.com/pytorch/pytorch/tree/main/.ci/docker.
It also uses the same directory structure as PyTorch.

31 changes: 31 additions & 0 deletions .lintrunner.toml
@@ -122,3 +122,34 @@ init_command = [
'--dry-run={{DRYRUN}}',
'--requirement=requirements-lintrunner.txt',
]

[[linter]]
code = 'ETCAPITAL'
include_patterns = [
'**/*.py',
'**/*.pyi',
'**/*.h',
'**/*.cpp',
'**/*.md',
'**/*.rst',
]
exclude_patterns = [
'third-party/**',
'**/third-party/**',
]
command = [
'python',
'-m',
'lintrunner_adapters',
'run',
'grep_linter',
'--pattern= Executorch\W+',
'--linter-name=ExecuTorchCapitalization',
'--error-name=Incorrect capitalization for ExecuTorch',
"""--error-description=
Please use ExecuTorch with capital T for consistency.
https://fburl.com/workplace/nsx6hib2
""",
'--',
'@{{PATHSFILE}}',
]
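The new `ETCAPITAL` linter above is just a grep over the listed file types. As a rough sketch (assuming Python `re` semantics match `grep_linter`'s pattern handling), the pattern ` Executorch\W+` flags a space-preceded `Executorch` followed by at least one non-word character, while leaving the correct `ExecuTorch` spelling and lowercase code paths alone:

```python
import re

# Same pattern the linter config greps for: a leading space, the
# miscapitalized "Executorch", then one or more non-word characters.
PATTERN = re.compile(r' Executorch\W+')

lines = [
    "built on the Executorch runtime",   # flagged: wrong capitalization
    "built on the ExecuTorch runtime",   # ok: capital T
    "see executorch/examples",           # ok: lowercase identifiers/paths
]

flagged = [line for line in lines if PATTERN.search(line)]
print(flagged)  # ['built on the Executorch runtime']
```

Note one apparent limitation of the pattern as written: an occurrence at the very start of a line has no leading space and would not match.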
4 changes: 2 additions & 2 deletions docs/website/docs/ir_spec/03_backend_dialect.md
@@ -23,7 +23,7 @@ To lower edge ops to backend ops, a pass will perform pattern matching to identi
* `transform()`. An API on `ExportProgram` that allows users to provide custom passes. Note that this is not guarded by any validator so the soundness of the program is not guaranteed.
* [`ExecutorchBackendConfig.passes`](https://github.com/pytorch/executorch/blob/main/exir/capture/_config.py#L40). If added here, the pass will be part of the lowering process from backend dialect to `ExecutorchProgram`.

-Example: one of such passes is `QuantFusion`. This pass takes a "canonical quantization pattern", ie. "dequant - some_op - quant" and fuse this pattern into a single operator that is backend specific, i.e. `quantized_decomposed::some_op`. You can find more details [here](../tutorials/short_term_quantization_flow.md). Another simpler example is [here](https://github.com/pytorch/executorch/blob/main/exir/passes/replace_edge_with_backend_pass.py#L20) where we replace sym_size operators to the ones that are understood by Executorch.
+Example: one of such passes is `QuantFusion`. This pass takes a "canonical quantization pattern", ie. "dequant - some_op - quant" and fuse this pattern into a single operator that is backend specific, i.e. `quantized_decomposed::some_op`. You can find more details [here](../tutorials/short_term_quantization_flow.md). Another simpler example is [here](https://github.com/pytorch/executorch/blob/main/exir/passes/replace_edge_with_backend_pass.py#L20) where we replace sym_size operators to the ones that are understood by ExecuTorch.

## API

@@ -38,7 +38,7 @@ Then the operator can be accessed/used from the passes. The `CompositeImplicitAu
2. Ensures the retracability of `ExportProgram`. Once retraced, the backend operator will be decomposed into the ATen ops used in the pattern.

## Op Set
-Unlike edge dialect where we have a well defined op set, for backend dialect, since it is target-aware we will be allowing user to use our API to register target-aware ops and they will be grouped by namespaces. Here are some examples: `executorch_prims` are ops that are used by Executorch runtime to perform operation on `SymInt`s. `quantized_decomposed` are ops that fuses edge operators for quantization purpose and are meaningful to targets that support quantization.
+Unlike edge dialect where we have a well defined op set, for backend dialect, since it is target-aware we will be allowing user to use our API to register target-aware ops and they will be grouped by namespaces. Here are some examples: `executorch_prims` are ops that are used by ExecuTorch runtime to perform operation on `SymInt`s. `quantized_decomposed` are ops that fuses edge operators for quantization purpose and are meaningful to targets that support quantization.

* `executorch_prims::add.int(SymInt a, SymInt b) -> SymInt`
* pattern: builtin.add
6 changes: 3 additions & 3 deletions docs/website/docs/tutorials/00_setting_up_executorch.md
@@ -1,6 +1,6 @@
-# Setting up Executorch
+# Setting up ExecuTorch

-This is a tutorial for building and installing Executorch from the GitHub repository.
+This is a tutorial for building and installing ExecuTorch from the GitHub repository.

## AOT Setup [(Open on Google Colab)](https://colab.research.google.com/drive/1m8iU4y7CRVelnnolK3ThS2l2gBo7QnAP#scrollTo=1o2t3LlYJQY5)

@@ -125,4 +125,4 @@ or execute the binary directly from the `--show-output` path shown when building
## More Examples

The [`executorch/examples`](https://github.com/pytorch/executorch/blob/main/examples) directory contains useful examples with a guide to lower and run
-popular models like MobileNet V3, Torchvision ViT, Wav2Letter, etc. on Executorch.
+popular models like MobileNet V3, Torchvision ViT, Wav2Letter, etc. on ExecuTorch.
30 changes: 15 additions & 15 deletions docs/website/docs/tutorials/aten_ops_and_aten_mode.md
@@ -3,40 +3,40 @@

## Introduction

-Executorch supports a subset of ATen-compliant operators.
+ExecuTorch supports a subset of ATen-compliant operators.
ATen-compliant operators are those defined in
[`native_functions.yaml`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/native_functions.yaml),
with their native functions (or kernels, we use these two terms interchangeably)
-either defined in ATen library or other user defined libraries. The ATen-compliant operators supported by Executorch have these traits (actually same for custom ops):
+either defined in ATen library or other user defined libraries. The ATen-compliant operators supported by ExecuTorch have these traits (actually same for custom ops):
1. Out variant, means these ops take an `out` argument
2. Functional except `out`. These ops shouldn't mutate input tensors other than `out`, shouldn't create aliasing views.

To give an example, `aten::add_.Tensor` is not supported since it mutates an input tensor, `aten::add.out` is supported.

-ATen mode is a build-time option to link ATen library into Executorch runtime, so those registered ATen-compliant ops can use their original ATen kernels.
+ATen mode is a build-time option to link ATen library into ExecuTorch runtime, so those registered ATen-compliant ops can use their original ATen kernels.

On the other hand we need to provide our custom kernels if ATen mode is off (a.k.a. lean mode).

-In the next section we will walk through the steps to register ATen-compliant ops into Executorch runtime.
+In the next section we will walk through the steps to register ATen-compliant ops into ExecuTorch runtime.

## Step by step guide
There are two branches for this use case:
* ATen mode. In this case we expect the exported model to be able to run with ATen kernels .
* Lean mode. This requires ATen-compliant op implementations using `ETensor`.

-In a nutshell, we need the following steps in order for a ATen-compliant op to work on Executorch:
+In a nutshell, we need the following steps in order for a ATen-compliant op to work on ExecuTorch:

#### ATen mode:
1. Define a target for selective build (`et_operator_library` macro)
2. Pass this target to codegen using `executorch_generated_lib` macro
-3. Hookup the generated lib into Executorch runtime.
+3. Hookup the generated lib into ExecuTorch runtime.

For more details on how to use selective build, check [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build).
#### Lean mode:
1. Declare the op name in `functions.yaml`. Detail instruction can be found in [Declare the operator in a YAML file](https://www.internalfb.com/code/fbsource/xplat/executorch/kernels/portable/README.md).
-2. (not required if using ATen mode) Implement the kernel for your operator using `ETensor`. Executorch provides a portable library for frequently used ATen-compliant ops. Check if the op you need is already there, or you can write your own kernel.
+2. (not required if using ATen mode) Implement the kernel for your operator using `ETensor`. ExecuTorch provides a portable library for frequently used ATen-compliant ops. Check if the op you need is already there, or you can write your own kernel.
3. Specify the kernel namespace and function name in `functions.yaml` so codegen knows how to bind operator to its kernel.
-4. Let codegen machinery generate code for either ATen mode or lean mode, and hookup the generated lib into Executorch runtime.
+4. Let codegen machinery generate code for either ATen mode or lean mode, and hookup the generated lib into ExecuTorch runtime.

### Case Study
Let's say a model uses an ATen-compliant operator `aten::add.out`.
@@ -90,7 +90,7 @@ The corresponding `functions.yaml` for this operator looks like:
```
Notice that there are some caveats:
#### Caveats
-* `dispatch` and `CPU` are legacy fields and they don't mean anything in Executorch context.
+* `dispatch` and `CPU` are legacy fields and they don't mean anything in ExecuTorch context.
* Namespace `aten` is omitted.
* We don't need to write `aten::add.out` function schema because we will use the schema definition in `native_functions.yaml` as our source of truth.
* Kernel namespace in the yaml file is `custom` instead of `custom::native`. This is because codegen will append a `native` namespace automatically. It also means the kernel always needs to be defined under `<name>::native`.
@@ -121,9 +121,9 @@ executorch_generated_lib(
)
```
### Usage of generated lib
-In the case study above, eventually we have `add_lib` which is a C++ library responsible to register `aten::add.out` into Executorch runtime.
+In the case study above, eventually we have `add_lib` which is a C++ library responsible to register `aten::add.out` into ExecuTorch runtime.

-In our Executorch binary target, add `add_lib` as a dependency:
+In our ExecuTorch binary target, add `add_lib` as a dependency:
```python
cxx_binary(
name = "executorch_bin",
@@ -138,15 +138,15 @@ cxx_binary(
To facilitate custom operator registration, we provide the following APIs:

- `functions.yaml`: ATen-compliant operator schema and kernel metadata are defined in this file.
-- `executorch_generated_lib`: the Buck rule to call Executorch codegen system and encapsulate generated C++ source files into libraries. If only include ATen-compliant operators, only one library will be generated:
-- `<name>`: contains C++ source files to register ATen-compliant operators. Required by Executorch runtime.
+- `executorch_generated_lib`: the Buck rule to call ExecuTorch codegen system and encapsulate generated C++ source files into libraries. If only include ATen-compliant operators, only one library will be generated:
+- `<name>`: contains C++ source files to register ATen-compliant operators. Required by ExecuTorch runtime.
- Input: most of the input fields are self-explainatory.
- `deps`: kernel libraries - can be custom kernels or portable kernels (see portable kernel library [README.md](https://fburl.com/code/zlgs6zzf) on how to add more kernels) - needs to be provided. Selective build related targets should also be passed into the generated libraries through `deps`.
- `define_static_targets`: if true we will generate a `<name>_static` library with static linkage. See docstring for more information.
- `functions_yaml_target`: the target pointing to `functions.yaml`. See `ATen-compliant Operator Registration` section for more details.


-We also provide selective build system to allow user to select operators from both `functions.yaml` and `custom_ops.yaml` into Executorch build. See [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build) section.
+We also provide selective build system to allow user to select operators from both `functions.yaml` and `custom_ops.yaml` into ExecuTorch build. See [Selective Build](https://www.internalfb.com/intern/staticdocs/executorch/docs/tutorials/custom_ops/#selective-build) section.



@@ -162,7 +162,7 @@ Nov 14 16:48:07 devvm11149.prn0.facebook.com bento[1985271]: [354870826409]Execu
Nov 14 16:48:07 devvm11149.prn0.facebook.com bento[1985271]: [354870830000]Executor.cpp:267 In function init(), assert failed (num_missing_ops == 0): There are 1 operators missing from registration to Executor. See logs for details
```

-This error message indicates that the operators are not registered into the Executorch runtime.
+This error message indicates that the operators are not registered into the ExecuTorch runtime.

For lean mode mode, please make sure the ATen-compliant operator schema is being added to your `functions.yaml`. For more guidance of how to write a `functions.yaml` file, please refer to [Declare the operator in a YAML file](https://www.internalfb.com/code/fbsource/xplat/executorch/kernels/portable/README.md).

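The out-variant contract this file describes — an op takes a caller-allocated `out` argument and mutates nothing else — can be sketched in plain Python. `Tensor` here is a minimal list-backed stand-in for illustration, not ExecuTorch's real `ETensor` API:

```python
# Illustrative sketch of the "out variant" contract; `Tensor` is a
# toy stand-in type, not ExecuTorch's ETensor.

class Tensor:
    def __init__(self, data):
        self.data = list(data)

def add_out(a: Tensor, b: Tensor, out: Tensor) -> Tensor:
    """Functional except `out` (like aten::add.out): reads a and b,
    writes only into the caller-allocated `out` buffer, no aliasing."""
    for i in range(len(out.data)):
        out.data[i] = a.data[i] + b.data[i]
    return out

def add_(a: Tensor, b: Tensor) -> Tensor:
    """In-place variant (like aten::add_.Tensor): mutates its input,
    which is why ops of this shape are not supported."""
    for i in range(len(a.data)):
        a.data[i] += b.data[i]
    return a

a, b = Tensor([1, 2]), Tensor([10, 20])
out = Tensor([0, 0])       # output memory allocated ahead of time
add_out(a, b, out)
print(a.data, out.data)    # [1, 2] [11, 22] -- inputs untouched
```

The point of the convention is that the runtime can plan all memory ahead of time: every op writes into a pre-allocated buffer instead of allocating or mutating its inputs.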
10 changes: 5 additions & 5 deletions docs/website/docs/tutorials/backend_delegate.md
@@ -76,16 +76,16 @@ __ET_NODISCARD Error register_backend(const Backend& backend);
```


-# How to delegate a PyTorch module to a different backend in Executorch for Model Authors
+# How to delegate a PyTorch module to a different backend in ExecuTorch for Model Authors

This note is to demonstrate the basic end-to-end flow of backend delegation in
-the Executorch runtime.
+the ExecuTorch runtime.

At a high level, here are the steps needed for delegation:

-1. Add your backend to Executorch.
+1. Add your backend to ExecuTorch.
2. Frontend: lower the PyTorch module or part of the module to a backend.
-3. Deployment: load and run the lowered module through Executorch runtime
+3. Deployment: load and run the lowered module through ExecuTorch runtime
interface.


Expand Down Expand Up @@ -247,7 +247,7 @@ with open(save_path, "wb") as f:

## Runtime

-The serialized flatbuffer model is loaded by the Executorch runtime. The
+The serialized flatbuffer model is loaded by the ExecuTorch runtime. The
preprocessed blob is directly stored in the flatbuffer, which is loaded into a
call to the backend's `init()` function during model initialization stage. At
the model execution stage, the initialized handled can be executed through the
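The runtime flow described in this file — a backend registers itself, its `init()` consumes the preprocessed blob at model-load time, and `execute()` runs the initialized handle per inference — might be sketched like this. Names, signatures, and the `EchoBackend` itself are illustrative only; the real interface is the C++ `register_backend` / `init` / `execute` API shown above:

```python
# Illustrative sketch of the delegation flow; not the real C++ API.

_backends = {}

class EchoBackend:
    def init(self, preprocessed_blob: bytes):
        # Called once at model-load time with the blob that the AOT
        # preprocess step embedded in the flatbuffer.
        return {"program": preprocessed_blob.decode()}

    def execute(self, handle, inputs):
        # Called per inference with the handle produced by init().
        return [f"{handle['program']}({x})" for x in inputs]

def register_backend(name, backend):
    _backends[name] = backend

register_backend("EchoBackend", EchoBackend())

backend = _backends["EchoBackend"]
handle = backend.init(b"compiled_graph")   # model initialization stage
print(backend.execute(handle, [1, 2]))     # ['compiled_graph(1)', 'compiled_graph(2)']
```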
6 changes: 3 additions & 3 deletions docs/website/docs/tutorials/bundled_program.md
@@ -19,16 +19,16 @@ We need the pointer to executorch program to do the execution. To unify the proc
```c++

/**
- * Finds the serialized Executorch program data in the provided file data.
+ * Finds the serialized ExecuTorch program data in the provided file data.
*
* The returned buffer is appropriate for constructing a
* torch::executor::Program.
*
* Calling this is only necessary if the file could be a bundled program. If the
- * file will only contain an unwrapped Executorch program, callers can construct
+ * file will only contain an unwrapped ExecuTorch program, callers can construct
* torch::executor::Program with file_data directly.
*
- * @param[in] file_data The contents of an Executorch program or bundled program
+ * @param[in] file_data The contents of an ExecuTorch program or bundled program
* file.
* @param[in] file_data_len The length of file_data, in bytes.
* @param[out] out_program_data The serialized Program data, if found.
6 changes: 3 additions & 3 deletions docs/website/docs/tutorials/cmake_build_system.md
@@ -30,7 +30,7 @@ useful to embedded systems users.
## One-time setup

1. Clone the repo and install buck2 as described in the "Runtime Setup" section
-of [Setting up Executorch](00_setting_up_executorch.md#runtime-setup)
+of [Setting up ExecuTorch](00_setting_up_executorch.md#runtime-setup)
- `buck2` is necessary because the CMake build system runs `buck2` commands
to extract source lists from the primary build system. It will be possible
to configure the CMake system to avoid calling `buck2`, though.
@@ -40,7 +40,7 @@ useful to embedded systems users.
calls to extract source lists from `buck2`. Consider doing this `pip
install` inside your conda environment if you created one during AOT Setup
(see [Setting up
-   Executorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab)).
+   ExecuTorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab)).
1. Install CMake version 3.19 or later

## Configure the CMake build
@@ -84,7 +84,7 @@ cmake --build cmake-out -j9

First, generate an `add.pte` or other ExecuTorch program file using the
instructions in the "AOT Setup" section of
-[Setting up Executorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab).
+[Setting up ExecuTorch](00_setting_up_executorch.md#aot-setup-open-on-google-colab).

Then, pass it to the commandline tool:
