We can see the example in `custom_ops_1.py`, where we try to register `my_ops::mul3` into the PyTorch runtime.
Notice that we need both the functional variant and the out variant for custom ops, because EXIR needs to perform memory planning on the out variant `my_ops::mul3_out`.
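As a rough sketch of what this kind of Python registration looks like (the dispatch key, function names, and use of the `mul3.out` overload form are assumptions for illustration, not necessarily exactly what `custom_ops_1.py` does):

```python
import torch
from torch.library import Library

# Hypothetical sketch: define a functional variant and an out variant
# in the my_ops namespace, then attach Python implementations.
my_ops_lib = Library("my_ops", "DEF")
my_ops_lib.define("mul3(Tensor input) -> Tensor")
my_ops_lib.define("mul3.out(Tensor input, *, Tensor(a!) out) -> Tensor(a!)")

def mul3_impl(input):
    return input * 3

def mul3_out_impl(input, *, out):
    out.copy_(input * 3)  # write into the caller-provided buffer
    return out

my_ops_lib.impl("mul3", mul3_impl, "CompositeExplicitAutograd")
my_ops_lib.impl("mul3.out", mul3_out_impl, "CompositeExplicitAutograd")
```

After registration, both variants are callable as `torch.ops.my_ops.mul3` and `torch.ops.my_ops.mul3.out`.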
The second option is to register the custom ops into the PyTorch JIT runtime using the C++ APIs (`TORCH_LIBRARY`/`TORCH_LIBRARY_IMPL`). This means we need to write C++ code, and that code needs to depend on `libtorch`.
We added an example in `custom_ops_2.cpp`, where we implement and register `my_ops::mul4`, as well as `custom_ops_2_out.cpp` with an implementation of `my_ops::mul4_out`.
By linking them both with `libtorch` and the `executorch` library, we can build a shared library, `libcustom_ops_aot_lib_2`, that can be dynamically loaded by the Python environment to register these ops into PyTorch. This is done by `torch.ops.load_library(<path_to_libcustom_ops_aot_lib_2>)` in `custom_ops_2.py`.
## C++ kernel registration
After the model is exported by EXIR, we need C++ implementations of these custom ops in order to run it. For example, `custom_ops_1_out.cpp` is a C++ kernel that can be plugged into the Executorch runtime. Beyond the kernel itself, we also need a way to bind the PyTorch op to it. This binding is specified in `custom_ops.yaml`.
For how to write these YAML entries, please refer to [`kernels/portable/README.md`](https://github.com/pytorch/executorch/blob/main/kernels/portable/README.md).
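As an illustrative sketch only (the authoritative entries live in `custom_ops.yaml`, and the exact schema is documented in the README linked above; the `kernel_name` below is an assumption), such an entry might look like:

```yaml
# Hypothetical binding entry: maps the op schema to a C++ kernel.
- func: my_ops::mul3.out(Tensor input, *, Tensor(a!) out) -> Tensor(a!)
  kernels:
    - arg_meta: null
      kernel_name: torch::executor::mul3_out_impl
```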
Currently we provide two build systems that link the `my_ops::mul3.out` kernel (written in `custom_ops_1.cpp`) to the Executorch runtime: Buck2 and CMake. Instructions for both are listed in `examples/custom_ops/test_custom_ops.sh` (`test_buck2_custom_op_1` and `test_cmake_custom_op_1`).
## Selective build
Note that we have defined custom ops for both `my_ops::mul3.out` and `my_ops::mul4.out` in `custom_ops.yaml`. Below is a demonstration of how to register into the runtime only the operators used by the model we are running.
In CMake, this is done by passing a list of operators to the `gen_oplist` custom rule: `--root_ops="my_ops::mul4.out"`.
In Buck2, this is done by a rule called `et_operator_library`:
```python
et_operator_library(
    name = "select_custom_ops_2",
    ops = [
        "my_ops::mul4.out",
    ],
    ...
)
```
We then let the custom ops library depend on this target, so that only the ops we want are registered.
For more information about selective build, please refer to [`docs/tutorials/selective_build.md`](https://github.com/pytorch/executorch/blob/main/docs/website/docs/tutorials/selective_build.md).