Commit 0c38c60

Browse files
haowhsu-quicfacebook-github-bot
authored andcommitted
Qualcomm AI Engine Direct - refactor documents (#1168)
Summary:
- Remove duplicated document under backends/qualcomm/setup.md and point it to docs/source/build-run-qualcomm-ai-engine-direct-backend.md
- Fix typos, broken links, and add a keyword hint for QNN

Pull Request resolved: #1168
Reviewed By: kirklandsign
Differential Revision: D53947053
Pulled By: cccclai
fbshipit-source-id: 99bbb56dbfc1365a51fe383a5f4fb9a17a3d6331
1 parent a704dd6 commit 0c38c60

File tree

2 files changed: +29 -12 lines changed

backends/qualcomm/README.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ we reserve the right to modify interfaces and implementations.
 
 This backend is implemented on the top of
 [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk).
-Please follow [setup](setup.md) to setup environment, build, and run executorch models by this backend.
+Please follow [tutorial](https://pytorch.org/executorch/stable/build-run-qualcomm-ai-engine-direct-backend.html) to setup environment, build, and run executorch models by this backend (Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation).
 
 ## Delegate Options
 
docs/source/build-run-qualcomm-ai-engine-direct-backend.md

Lines changed: 28 additions & 11 deletions
@@ -1,6 +1,6 @@
 # Building and Running ExecuTorch with Qualcomm AI Engine Direct Backend
 
-In this tutorial we will walk you through the process of getting setup to
+In this tutorial we will walk you through the process of getting started to
 build ExecuTorch for Qualcomm AI Engine Direct and running a model on it.
 
 Qualcomm AI Engine Direct is also referred to as QNN in the source and documentation.
@@ -53,7 +53,7 @@ This example is verified with SM8550 and SM8450.
 
 - Follow ExecuTorch recommneded Python version.
 - A compiler to compile AOT parts. GCC 9.4 come with Ubuntu20.04 is verified.
-- Android NDK. This example is verified with NDK 25c.
+- [Android NDK](https://developer.android.com/ndk). This example is verified with NDK 25c.
 - [Qualcomm AI Engine Direct SDK](https://developer.qualcomm.com/software/qualcomm-ai-engine-direct-sdk)
   - Follow the download button. After logging in, search Qualcomm AI Stack at the *Tool* panel.
   - You can find Qualcomm AI Engine Direct SDK under the AI Stack group.
@@ -97,7 +97,7 @@ i.e., the directory containing `QNN_README.txt`.
 We set `LD_LIBRARY_PATH` to make sure the dynamic linker can find QNN libraries.
 
 Further, we set `PYTHONPATH` because it's easier to develop and import ExecuTorch
-Pytho APIs.
+Python APIs.
 
 ```bash
 export LD_LIBRARY_PATH=$QNN_SDK_ROOT/lib/x86_64-linux-clang/:$LD_LIBRARY_PATH
@@ -106,7 +106,7 @@ export PYTHONPATH=$EXECUTORCH_ROOT/..
 
 ## Build
 
-An example script for below building instructions is [here](../../backends/qualcomm/scripts/build.sh).
+An example script for below building instructions is [here](https://github.com/pytorch/executorch/blob/main/backends/qualcomm/scripts/build.sh).
 
 ### AOT (Ahead-of-time) components:
 
@@ -135,23 +135,40 @@ Commands to build `qnn_executor_runner` for Android:
 cd $EXECUTORCH_ROOT
 mkdir build_android
 cd build_android
-cmake .. -DQNN_SDK_ROOT=$QNN_SDK_ROOT \
+# build executorch & qnn_executorch_backend
+cmake .. \
+    -DBUCK2=buck2 \
+    -DCMAKE_INSTALL_PREFIX=$PWD \
     -DEXECUTORCH_BUILD_QNN=ON \
+    -DQNN_SDK_ROOT=$QNN_SDK_ROOT \
     -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
     -DANDROID_ABI='arm64-v8a' \
     -DANDROID_NATIVE_API_LEVEL=23 \
-    -DBUCK2=buck2
-cmake --build . -j8
+    -B$PWD
+
+cmake --build $PWD -j16 --target install
+
+cmake ../examples/qualcomm \
+    -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
+    -DANDROID_ABI='arm64-v8a' \
+    -DANDROID_NATIVE_API_LEVEL=23 \
+    -DCMAKE_PREFIX_PATH="$PWD/lib/cmake/ExecuTorch;$PWD/third-party/gflags;" \
+    -DCMAKE_FIND_ROOT_PATH_MODE_PACKAGE=BOTH \
+    -Bexamples/qualcomm
+
+cmake --build examples/qualcomm -j16
 ```
 
+**Note:** If you want to build for release, add `-DCMAKE_BUILD_TYPE=Release` to the `cmake` command options.
+
 You can find `qnn_executor_runner` under `build_android/examples/qualcomm/`.
 
 
 ## Deploying and running on device
 
 ### AOT compile a model
 
-You can refer to [this script](../../examples/qualcomm/scripts/deeplab_v3.py) for the exact flow.
+You can refer to [this script](https://github.com/pytorch/executorch/blob/main/examples/qualcomm/scripts/deeplab_v3.py) for the exact flow.
 We use deeplab-v3-resnet101 as an example in this tutorial. Run below commands to compile:
 
 ```
@@ -179,7 +196,7 @@ output output output ([getitem_
 The compiled model is `./deeplab_v3/dlv3_qnn.pte`.
 
 
-### Run model Inference on an Android smartphone with Qualcomm SoCs
+### Run model inference on an Android smartphone with Qualcomm SoCs
 
 ***Step 1***. We need to push required QNN libraries to the device.
 
@@ -222,12 +239,12 @@ I 00:00:01.835706 executorch:qnn_executor_runner.cpp:298] 100 inference took 109
 ### Running a model via ExecuTorch's android demo-app
 
 An Android demo-app using Qualcomm AI Engine Direct Backend can be found in
-`examples`. Please refer to android demo app tutorial.
+`examples`. Please refer to android demo app [tutorial](https://pytorch.org/executorch/stable/demo-apps-android.html).
 
 
 ## What is coming?
 
-- [An example using quantized mobilebert](https://github.com/pytorch/executorch/pull/658) to solve multi-class text classification.
+- [An example using quantized mobilebert](https://github.com/pytorch/executorch/pull/1043) to solve multi-class text classification.
 - More Qualcomm AI Engine Direct accelerators, e.g., GPU.
 
 ## FAQ
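The adb commands that belong to ***Step 1*** above fall outside the hunks shown in this diff. For orientation only, here is a minimal sketch of that deployment step; the device directory, the exact set of QNN libraries, and the `--model_path` flag are assumptions rather than content from this commit, so follow the linked tutorial for the authoritative commands.

```bash
# Hypothetical sketch only: library names, paths, and the runner flag are assumptions.
DEVICE_DIR=/data/local/tmp/executorch_qnn   # any writable directory on the device
adb shell "mkdir -p ${DEVICE_DIR}"

# Push the QNN runtime libraries the delegate loads at inference time.
adb push ${QNN_SDK_ROOT}/lib/aarch64-android/libQnnHtp.so ${DEVICE_DIR}
adb push ${QNN_SDK_ROOT}/lib/aarch64-android/libQnnSystem.so ${DEVICE_DIR}

# Push the runner built above and the AOT-compiled program.
adb push build_android/examples/qualcomm/qnn_executor_runner ${DEVICE_DIR}
adb push ./deeplab_v3/dlv3_qnn.pte ${DEVICE_DIR}

# Run inference; LD_LIBRARY_PATH lets the runner resolve the pushed libraries.
adb shell "cd ${DEVICE_DIR} && LD_LIBRARY_PATH=${DEVICE_DIR} ./qnn_executor_runner --model_path dlv3_qnn.pte"
```

The essential point, per the tutorial's own "push required QNN libraries to the device" step, is that the runner binary, the QNN runtime libraries, and the compiled `.pte` program must all be on the device, with `LD_LIBRARY_PATH` covering the directory holding the pushed libraries.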
