Commit bcec8f6

Add a CMake snippet to the XNNPACK backend doc build section (#8730)
Add CMake example to xnnpack backend doc
1 parent 3200622 commit bcec8f6

File tree: 2 files changed, +15 −2 lines


docs/source/backends-xnnpack.md

Lines changed: 14 additions & 1 deletion
@@ -105,7 +105,20 @@ To run the model on-device, use the standard ExecuTorch runtime APIs. See [Runni
 The XNNPACK delegate is included by default in the published Android, iOS, and pip packages. When building from source, pass `-DEXECUTORCH_BUILD_XNNPACK=ON` when configuring the CMake build to compile the XNNPACK backend.

-To link against the backend, add the `xnnpack_backend` CMake target as a build dependency, or link directly against `libxnnpack_backend`. Due to the use of static registration, it may be necessary to link with whole-archive. This can typically be done by passing the following flags: `-Wl,--whole-archive libxnnpack_backend.a -Wl,--no-whole-archive`.
+To link against the backend, add the `xnnpack_backend` CMake target as a build dependency, or link directly against `libxnnpack_backend`. Due to the use of static registration, it may be necessary to link with whole-archive. This can typically be done by passing `"$<LINK_LIBRARY:WHOLE_ARCHIVE,xnnpack_backend>"` to `target_link_libraries`.
+
+```
+# CMakeLists.txt
+add_subdirectory("executorch")
+...
+target_link_libraries(
+    my_target
+    PRIVATE executorch
+            executorch_module_static
+            executorch_tensor
+            optimized_native_cpu_ops_lib
+            xnnpack_backend)
+```

 No additional steps are necessary to use the backend beyond linking the target. Any XNNPACK-delegated .pte file will automatically run on the registered backend.
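The `$<LINK_LIBRARY:WHOLE_ARCHIVE,...>` generator expression used in the diff requires CMake 3.24 or newer. For older CMake versions, a fallback is to pass the raw GNU ld flags the doc mentions around the backend target. The sketch below is a hypothetical fragment, not part of the commit; `my_target` is an assumed consumer target, and on Apple's linker `-force_load` would be needed instead of `--whole-archive`.

```cmake
# Assumed fallback for CMake < 3.24 with GNU ld (not from the commit).
# Wraps only the XNNPACK backend in --whole-archive so its statically
# registered backend symbols are kept by the linker.
target_link_libraries(
    my_target
    PRIVATE executorch
            "-Wl,--whole-archive" xnnpack_backend "-Wl,--no-whole-archive")
```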

docs/source/using-executorch-cpp.md

Lines changed: 1 addition & 1 deletion
@@ -36,7 +36,7 @@ For more information on the Module class, see [Running an ExecuTorch Model Using
 
 Running a model using the low-level runtime APIs allows for a high degree of control over memory allocation, placement, and loading. This allows for advanced use cases, such as placing allocations in specific memory banks or loading a model without a file system. For an end-to-end example using the low-level runtime APIs, see [Running an ExecuTorch Model in C++ Tutorial](running-a-model-cpp-tutorial.md).
 
-## Building with C++
+## Building with CMake
 
 ExecuTorch uses CMake as the primary build system. Inclusion of the module and tensor APIs is controlled by the `EXECUTORCH_BUILD_EXTENSION_MODULE` and `EXECUTORCH_BUILD_EXTENSION_TENSOR` CMake options. As these APIs may not be supported on embedded systems, they are disabled by default when building from source. The low-level API surface is always included. To link, add the `executorch` target as a CMake dependency, along with `executorch_module_static` and `executorch_tensor`, if desired.
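Since the extension APIs are off by default in source builds, a configure step must enable them explicitly. The following is an illustrative sketch, not from the commit: the source and build directory paths are assumed, and the option names are the ones the doc cites.

```cmake
# Assumed configure invocation (paths are illustrative):
#   cmake -S executorch -B cmake-out \
#     -DEXECUTORCH_BUILD_EXTENSION_MODULE=ON \
#     -DEXECUTORCH_BUILD_EXTENSION_TENSOR=ON \
#     -DEXECUTORCH_BUILD_XNNPACK=ON
#   cmake --build cmake-out -j8
```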
