
Commit d85b2ad

larryliu0820 authored and facebook-github-bot committed
Update custom ops readme
Summary: As titled
Reviewed By: JacobSzwejbka
Differential Revision: D48408177
fbshipit-source-id: 6cb8d87c818c3a4cf9fc2f174ef29b5d5c2fd92b
1 parent 814414c commit d85b2ad

File tree

1 file changed (+29, −2 lines)


examples/custom_ops/README.md

Lines changed: 29 additions & 2 deletions
@@ -19,9 +19,15 @@ We can see the example in `custom_ops_1.py` where we try to register `my_ops::mu
 
 Notice that we need both the functional variant and the out variant for custom ops, because EXIR needs to perform memory planning on the out variant `my_ops::mul3_out`.
 
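The functional/out-variant pairing required above can be sketched torch-free; plain lists stand in for tensors, and everything here is illustrative rather than ExecuTorch code:

```python
# Illustrative, torch-free sketch of the two required variants.
# Names mirror my_ops::mul3 / my_ops::mul3_out.

def mul3(input):
    """Functional variant: allocates and returns a fresh output."""
    return [x * 3 for x in input]

def mul3_out(input, out):
    """Out variant: writes into a caller-provided buffer and returns it,
    which is what lets EXIR plan memory ahead of time."""
    for i, x in enumerate(input):
        out[i] = x * 3
    return out

buf = [0, 0, 0]
assert mul3_out([1, 2, 3], buf) is buf   # output aliases the planned buffer
assert buf == [3, 6, 9]
assert mul3([1, 2, 3]) == [3, 6, 9]
```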
+The second option is to register the custom ops into the PyTorch JIT runtime using C++ APIs (`TORCH_LIBRARY`/`TORCH_LIBRARY_IMPL`). This also means we need to write C++ code, and that code needs to depend on `libtorch`.
+
+We added an example in `custom_ops_2.cpp`, where we implement and register `my_ops::mul4`, along with `custom_ops_2_out.cpp`, which implements `my_ops::mul4_out`.
+
+By linking them both with the `libtorch` and `executorch` libraries, we can build a shared library, `libcustom_ops_aot_lib_2`, that can be dynamically loaded by the Python environment, registering these ops into PyTorch. This is done by `torch.ops.load_library(<path_to_libcustom_ops_aot_lib_2>)` in `custom_ops_2.py`.
+
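The two-step C++ registration described above (declare the op schema with `TORCH_LIBRARY`, then attach kernels with `TORCH_LIBRARY_IMPL`) can be mimicked in a minimal Python sketch, assuming a toy registry rather than the real PyTorch dispatcher:

```python
# Toy registry mimicking the TORCH_LIBRARY / TORCH_LIBRARY_IMPL split.
# All names and schemas here are illustrative, not real dispatcher APIs.

_SCHEMAS = {}   # "namespace::op" -> schema string
_KERNELS = {}   # "namespace::op" -> callable

def define(qualified_name, schema):
    """Analogue of TORCH_LIBRARY: declare that the op exists."""
    _SCHEMAS[qualified_name] = schema

def impl(qualified_name, kernel):
    """Analogue of TORCH_LIBRARY_IMPL: attach a kernel to a declared op."""
    if qualified_name not in _SCHEMAS:
        raise KeyError(f"{qualified_name} was never declared")
    _KERNELS[qualified_name] = kernel

def call(qualified_name, *args):
    """Dispatch a call to the registered kernel."""
    return _KERNELS[qualified_name](*args)

define("my_ops::mul4", "(Tensor input) -> Tensor")
impl("my_ops::mul4", lambda xs: [x * 4 for x in xs])

assert call("my_ops::mul4", [1, 2]) == [4, 8]
```

The point of the split is the same as in C++: the schema declaration is shared, while different backends (eager, ExecuTorch) can plug in different kernels for the same op name.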
 ## C++ kernel registration
 
-After the model is exported by EXIR, we need C++ implementations of these custom ops in order to run it. `custom_ops_1.cpp` is an example C++ kernel. Other than that, we also need a way to bind the PyTorch op to this kernel. This binding is specified in `custom_ops.yaml`:
+After the model is exported by EXIR, we need C++ implementations of these custom ops in order to run it. For example, `custom_ops_1_out.cpp` is a C++ kernel that can be plugged into the ExecuTorch runtime. We also need a way to bind the PyTorch op to this kernel. This binding is specified in `custom_ops.yaml`:
 ```yaml
 - func: my_ops::mul3.out(Tensor input, *, Tensor(a!) output) -> Tensor(a!)
   kernels:
@@ -30,4 +36,25 @@ After the model is exported by EXIR, we need C++ implementations of these custom
 ```
 For how to write these YAML entries, please refer to [`kernels/portable/README.md`](https://github.com/pytorch/executorch/blob/main/kernels/portable/README.md).
 
-Currently we provide 2 build systems that links `my_ops::mul3.out` kernel (written in `custom_ops_1.cpp`) to Executor runtime: buck2 and CMake. Both instructions are listed in `examples/custom_ops/test_custom_ops.sh`.
+Currently we provide two build systems that link the `my_ops::mul3.out` kernel (written in `custom_ops_1.cpp`) to the ExecuTorch runtime: buck2 and CMake. Instructions for both are listed in `examples/custom_ops/test_custom_ops.sh` (`test_buck2_custom_op_1` and `test_cmake_custom_op_1`).
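The `func` schema string in such a YAML entry carries the information the binding needs: the qualified op name and which arguments are out arguments (marked `(a!)`). A rough sketch of extracting that information, assuming a toy parser rather than the real ExecuTorch codegen:

```python
# Toy parser for a YAML `func` schema string; illustrative only,
# not the actual ExecuTorch code-generation logic.

def parse_schema(func):
    """Return (qualified op name, list of out-argument names)."""
    name, rest = func.split("(", 1)          # "my_ops::mul3.out", args + return
    args = rest.rsplit("->", 1)[0]           # drop the return type
    args = args.rstrip(") ").split(",")      # individual argument strings
    out_args = [a.split()[-1] for a in args if "(a!)" in a]
    return name.strip(), out_args

name, outs = parse_schema(
    "my_ops::mul3.out(Tensor input, *, Tensor(a!) output) -> Tensor(a!)")
assert name == "my_ops::mul3.out"
assert outs == ["output"]
```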
+
+## Selective build
+
+Note that we have defined custom ops for both `my_ops::mul3.out` and `my_ops::mul4.out` in `custom_ops.yaml`. Below is a demonstration of how to register into the runtime only the operators used by the model we are running.
+
+In CMake, this is done by passing a list of operators to the `gen_oplist` custom rule: `--root_ops="my_ops::mul4.out"`.
+
+In Buck2, this is done by a rule called `et_operator_library`:
+```python
+et_operator_library(
+    name = "select_custom_ops_2",
+    ops = [
+        "my_ops::mul4.out",
+    ],
+    ...
+)
+```
+
+We then let the custom ops library depend on this target, so that only the ops we want are registered.
+
+For more information about selective build, please refer to [`docs/tutorials/selective_build.md`](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/selective_build.md).
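What selective build achieves can be sketched as a simple filter over the declared kernels; the kernel-name strings below are hypothetical stand-ins for the entries in `custom_ops.yaml`:

```python
# Illustrative sketch of selective build: starting from every kernel
# declared in custom_ops.yaml, keep only the root ops the model uses
# (here, the equivalent of --root_ops="my_ops::mul4.out").
# Kernel names are hypothetical placeholders.

ALL_KERNELS = {
    "my_ops::mul3.out": "mul3_out_kernel",
    "my_ops::mul4.out": "mul4_out_kernel",
}

def select_kernels(root_ops):
    """Keep only kernels whose op appears in the selected root ops."""
    return {op: k for op, k in ALL_KERNELS.items() if op in root_ops}

selected = select_kernels({"my_ops::mul4.out"})
assert selected == {"my_ops::mul4.out": "mul4_out_kernel"}
```

Only the selected kernels end up registered in (and linked into) the runtime, which is what keeps the binary size down.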
