
Commit 26f05f5

Merge pull request pytorch#14 from ynimmaga/doc_changes
Documentation changes for openvino backend
2 parents c979f52 + 32e7cc7 commit 26f05f5

4 files changed: +377 −64 lines changed

backends/openvino/README.md

Lines changed: 90 additions & 0 deletions
# OpenVINO Backend for ExecuTorch

The OpenVINO backend enables optimized execution of deep learning models on Intel hardware, leveraging Intel's [OpenVINO toolkit](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) for inference acceleration.

## Supported Hardware

OpenVINO backend supports the following hardware:

- Intel CPUs
- Intel integrated GPUs
- Intel discrete GPUs
- Intel NPUs

## Directory Structure

```
executorch
├── backends
│   └── openvino
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.hpp
│       ├── scripts
│       │   └── openvino_build.sh
│       ├── tests
│       ├── CMakeLists.txt
│       ├── README.md
│       ├── __init__.py
│       ├── openvino_functions.yaml
│       ├── partitioner.py
│       ├── preprocess.py
│       └── requirements.txt
└── examples
    └── openvino
        ├── aot
        │   ├── README.md
        │   └── aot_openvino_compiler.py
        ├── executor_runner
        │   └── openvino_executor_runner.cpp
        ├── CMakeLists.txt
        ├── README.md
        └── openvino_build_example.sh
```

## Build Instructions

### Prerequisites

Before you begin, ensure you have OpenVINO installed and configured on your system:

## TODO: Update with the OpenVINO commit/release tag once the changes in OpenVINO are merged
## TODO: Add instructions for support with the OpenVINO release package

```bash
git clone -b executorch_ov_backend https://github.com/ynimmaga/openvino
cd openvino
git submodule update --init --recursive
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON
make -j<N>

cd ..
cmake --install build --prefix <your_preferred_install_location>
cd <your_preferred_install_location>
source setupvars.sh
```
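
To verify the installation, a quick sanity check (a minimal sketch that assumes only the `openvino` Python package built above) lists the version and the inference devices OpenVINO detects:

```python
# Sanity check: confirm the OpenVINO Python package is importable and
# list the inference devices visible on this machine.
from openvino.runtime import Core, get_version

print("OpenVINO version:", get_version())
print("Available devices:", Core().available_devices)  # e.g. ['CPU', 'GPU']
```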

### Setup

Follow the steps below to set up your build environment:

1. **Set up the ExecuTorch Environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.

2. **Set up the OpenVINO Backend Environment**
   - Install the required dependencies. Ensure that you are inside the `executorch/backends/openvino/` directory:
   ```bash
   pip install -r requirements.txt
   ```

3. Navigate to the `scripts/` directory.

4. **Build the OpenVINO Backend**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process. The OpenVINO backend will be built under `cmake-openvino-out/backends/openvino/` as `libopenvino_backend.so`:

   ```bash
   ./openvino_build.sh
   ```

### Run

Please refer to the [README.md](../../examples/openvino/README.md) for instructions on running examples of various models with the OpenVINO backend.

docs/source/build-run-openvino.md

Lines changed: 202 additions & 0 deletions
# Building and Running ExecuTorch with OpenVINO Backend

In this tutorial we will walk you through the process of setting up the prerequisites, building the OpenVINO backend library, exporting `.pte` models with OpenVINO optimizations, and executing the exported models on Intel hardware.

<!----This will show a grid card on the page----->
::::{grid} 2
:::{grid-item-card} What you will learn in this tutorial:
:class-card: card-prerequisites
* In this tutorial you will learn how to lower and deploy a model with OpenVINO.
:::
:::{grid-item-card} Tutorials we recommend you complete before this:
:class-card: card-prerequisites
* [Introduction to ExecuTorch](intro-how-it-works.md)
* [Setting up ExecuTorch](getting-started-setup.md)
* [Building ExecuTorch with CMake](runtime-build-and-cross-compilation.md)
:::
::::

## Introduction to OpenVINO

[OpenVINO](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) is an open-source toolkit designed to enhance AI inference on Intel hardware by reducing latency and increasing throughput while preserving accuracy. It optimizes hardware utilization and simplifies AI development and deep learning integration across domains such as computer vision, large language models (LLMs), and generative AI.

OpenVINO is integrated as an ExecuTorch delegate to accelerate AI applications deployed with ExecuTorch APIs.

## Supported Hardware

OpenVINO backend supports the following hardware:

- Intel CPUs
- Intel integrated GPUs
- Intel discrete GPUs
- Intel NPUs

## Directory Structure

```
executorch
├── backends
│   └── openvino
│       ├── runtime
│       │   ├── OpenvinoBackend.cpp
│       │   └── OpenvinoBackend.hpp
│       ├── scripts
│       │   └── openvino_build.sh
│       ├── tests
│       ├── CMakeLists.txt
│       ├── README.md
│       ├── __init__.py
│       ├── openvino_functions.yaml
│       ├── partitioner.py
│       ├── preprocess.py
│       └── requirements.txt
└── examples
    └── openvino
        ├── aot
        │   ├── README.md
        │   └── aot_openvino_compiler.py
        ├── executor_runner
        │   └── openvino_executor_runner.cpp
        ├── CMakeLists.txt
        ├── README.md
        └── openvino_build_example.sh
```

## Instructions for Building OpenVINO Backend

### Prerequisites

Before you begin, ensure you have OpenVINO installed and configured on your system:

#### TODO: Update with the OpenVINO commit/release tag once the changes in OpenVINO are merged
#### TODO: Add instructions for support with the OpenVINO release package

```bash
git clone -b executorch_ov_backend https://github.com/ynimmaga/openvino
cd openvino
git submodule update --init --recursive
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=Release -DENABLE_PYTHON=ON -DENABLE_WHEEL=ON
make -j<N>

cd ..
cmake --install build --prefix <your_preferred_install_location>
cd <your_preferred_install_location>
source setupvars.sh
```

### Setup

Follow the steps below to set up your build environment:

1. **Set up the ExecuTorch Environment**: Refer to the [Environment Setup](https://pytorch.org/executorch/stable/getting-started-setup#environment-setup) guide for detailed instructions on setting up the ExecuTorch environment.

2. **Set up the OpenVINO Backend Environment**
   - Install the required dependencies. Ensure that you are inside the `executorch/backends/openvino/` directory:
   ```bash
   pip install -r requirements.txt
   ```

3. Navigate to the `scripts/` directory.

4. **Build the OpenVINO Backend**: Once the prerequisites are in place, run the `openvino_build.sh` script to start the build process. The OpenVINO backend will be built under `cmake-openvino-out/backends/openvino/` as `libopenvino_backend.so`:

   ```bash
   ./openvino_build.sh
   ```

## Build Instructions for Examples

### AOT step:
Refer to the [README.md](aot/README.md) in the `aot` folder for detailed instructions on exporting deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to the OpenVINO backend using ExecuTorch. Users can dynamically specify the model, input shape, and target device.

Below is an example to export a ResNet50 model from the Torchvision model suite for the CPU device with an input shape of `[1, 3, 256, 256]`:

```bash
cd aot
python aot_openvino_compiler.py --suite torchvision --model resnet50 --input_shape "(1, 3, 256, 256)" --device CPU
```
The exported model will be saved as `resnet50.pte` in the current directory.
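
For a sense of what such an export involves, below is a minimal, illustrative sketch of the standard ExecuTorch lowering flow with an OpenVINO partitioner. The `OpenvinoPartitioner` import path and constructor arguments here are assumptions based on this backend's `partitioner.py`; `aot_openvino_compiler.py` remains the authoritative implementation.

```python
# Illustrative sketch only -- the real logic lives in aot_openvino_compiler.py.
import torch
import torchvision.models as models
from executorch.exir import to_edge

# Hypothetical import path; see backends/openvino/partitioner.py for the real one.
from executorch.backends.openvino.partitioner import OpenvinoPartitioner

model = models.resnet50(weights="DEFAULT").eval()
example_inputs = (torch.randn(1, 3, 256, 256),)

# Capture the model graph, convert it to the Edge dialect, and hand
# OpenVINO-supported subgraphs to the backend via the partitioner.
exported = torch.export.export(model, example_inputs)
edge = to_edge(exported)
edge = edge.to_backend(OpenvinoPartitioner())  # constructor args are assumed

# Serialize to the .pte format consumed by the ExecuTorch runtime.
with open("resnet50.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```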

#### **Arguments**
- **`--suite`** (required):
  Specifies the model suite to use.
  Supported values:
  - `timm` (e.g., VGG16, ResNet50)
  - `torchvision` (e.g., resnet18, mobilenet_v2)
  - `huggingface` (e.g., bert-base-uncased)

- **`--model`** (required):
  Name of the model to export.
  Examples:
  - For `timm`: `vgg16`, `resnet50`
  - For `torchvision`: `resnet18`, `mobilenet_v2`
  - For `huggingface`: `bert-base-uncased`, `distilbert-base-uncased`

- **`--input_shape`** (required):
  Input shape for the model. Provide this as a **list** or **tuple** (see the parsing sketch after this list).
  Examples:
  - `[1, 3, 224, 224]` (Zsh users: wrap in quotes)
  - `(1, 3, 224, 224)`

- **`--device`** (optional):
  Target device for the compiled model. Default is `CPU`.
  Examples: `CPU`, `GPU`

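Since the shape arrives as a string, the script presumably converts it into a Python list or tuple before tracing; a minimal, safe way to do that (an assumption about the implementation — see `aot_openvino_compiler.py` for the actual parsing) is `ast.literal_eval`:

```python
# Parse a shape string such as "(1, 3, 224, 224)" or "[1, 3, 224, 224]"
# into a Python tuple/list without the security risks of eval().
import ast

shape = ast.literal_eval("(1, 3, 224, 224)")
print(shape)  # (1, 3, 224, 224)
```
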
### Build C++ OpenVINO Examples
Build the backend and the examples by executing the script:
```bash
./openvino_build_example.sh
```
The executable is saved in `<executorch_root>/cmake-openvino-out/examples/openvino/`.

Now, run the example using the executable generated in the above step. The executable requires a model file (the `.pte` file generated in the AOT step), the number of inference iterations, and optional input/output paths.

#### Command Syntax:

```
cd ../../cmake-openvino-out/examples/openvino

./openvino_executor_runner \
    --model_path=<path_to_model> \
    --num_iter=<iterations> \
    [--input_list_path=<path_to_input_list>] \
    [--output_folder_path=<path_to_output_folder>]
```
#### Command-Line Arguments

- `--model_path`: (Required) Path to the model serialized in `.pte` format.
- `--num_iter`: (Optional) Number of times to run inference (default: 1).
- `--input_list_path`: (Optional) Path to a file containing the list of raw input tensor files (a sketch for generating such files follows this list).
- `--output_folder_path`: (Optional) Path to a folder where output tensor files will be saved.
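
The exact on-disk format that `--input_list_path` expects is defined by `openvino_executor_runner.cpp`; as an illustration only, assuming the runner reads one raw binary tensor file per line, input files could be generated like this (file names and dtype are hypothetical):

```python
# Hypothetical helper: dump a raw float32 input tensor and an input list file.
# The actual format expected by openvino_executor_runner is defined in
# examples/openvino/executor_runner/openvino_executor_runner.cpp.
import numpy as np

x = np.random.rand(1, 3, 256, 256).astype(np.float32)
x.tofile("input_0.raw")  # raw little-endian float32 bytes, no header

with open("input_list.txt", "w") as f:
    f.write("input_0.raw\n")  # one raw tensor file per line (assumed)
```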
174+
#### Example Usage
175+
176+
Run inference with a given model for 10 iterations and save outputs:
177+
178+
```
179+
./openvino_executor_runner \
180+
--model_path=model.pte \
181+
--num_iter=10 \
182+
--output_folder_path=outputs/
183+
```
184+
185+
Run inference with an input tensor file:
186+
187+
```
188+
./openvino_executor_runner \
189+
--model_path=model.pte \
190+
--num_iter=5 \
191+
--input_list_path=input_list.txt \
192+
--output_folder_path=outputs/
193+
```
194+

## Supported model list

### TODO

## FAQ

If you encounter any issues while reproducing the tutorial, please file a GitHub issue on the ExecuTorch repo and tag it with the `#openvino` tag.

examples/openvino/README.md

Lines changed: 85 additions & 0 deletions
# OpenVINO Backend Examples

This guide provides detailed instructions on how to export models for ExecuTorch and execute them on the OpenVINO backend. The examples demonstrate how to export a model, load a model, prepare input tensors, execute inference, and save the output results.

## Directory Structure

Below is the layout of the `examples/openvino` directory, which includes the necessary files for the example applications:

```
examples/openvino
├── aot                               # Directory with scripts and instructions for AoT export
│   ├── README.md                     # Instructions to export models to '.pte'
│   └── aot_openvino_compiler.py      # Example script for AoT export
├── executor_runner                   # Directory with examples for C++ execution
│   └── openvino_executor_runner.cpp  # Example C++ file for execution
├── CMakeLists.txt                    # CMake build configuration to build examples
├── README.md                         # Documentation for examples (this file)
└── openvino_build_example.sh         # Script to build examples for the OpenVINO backend
```

# Build Instructions for Examples

## Environment Setup
Follow the **Prerequisites** and **Setup** [instructions](../../backends/openvino/README.md) in `backends/openvino/README.md` to set up the OpenVINO backend.

## AOT step:
Refer to the [README.md](aot/README.md) in the `aot` folder for detailed instructions on exporting deep learning models from various model suites (TIMM, Torchvision, Hugging Face) to the OpenVINO backend using ExecuTorch. Users can dynamically specify the model, input shape, and target device.

Below is an example to export a ResNet50 model from the Torchvision model suite for the CPU device with an input shape of `[1, 3, 256, 256]`:

```bash
cd aot
python aot_openvino_compiler.py --suite torchvision --model resnet50 --input_shape "(1, 3, 256, 256)" --device CPU
```
The exported model will be saved as `resnet50.pte` in the current directory.

## Build OpenVINO Examples
Build the backend and the examples by executing the script:
```bash
./openvino_build_example.sh
```
The executable is saved in `<executorch_root>/cmake-openvino-out/examples/openvino/`.

### Run the example

Now, run the example using the executable generated in the above step. The executable requires a model file (the `.pte` file generated in the AOT step), the number of inference iterations, and optional input/output paths.

#### Command Syntax:

```
cd ../../cmake-openvino-out/examples/openvino

./openvino_executor_runner \
    --model_path=<path_to_model> \
    --num_iter=<iterations> \
    [--input_list_path=<path_to_input_list>] \
    [--output_folder_path=<path_to_output_folder>]
```
#### Command-Line Arguments

- `--model_path`: (Required) Path to the model serialized in `.pte` format.
- `--num_iter`: (Optional) Number of times to run inference (default: 1).
- `--input_list_path`: (Optional) Path to a file containing the list of raw input tensor files.
- `--output_folder_path`: (Optional) Path to a folder where output tensor files will be saved.

#### Example Usage

Run inference with a given model for 10 iterations and save outputs:

```
./openvino_executor_runner \
    --model_path=model.pte \
    --num_iter=10 \
    --output_folder_path=outputs/
```

Run inference with an input tensor file:

```
./openvino_executor_runner \
    --model_path=model.pte \
    --num_iter=5 \
    --input_list_path=input_list.txt \
    --output_folder_path=outputs/
```
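
To inspect the saved outputs, and assuming the runner writes raw float32 tensor files (the file naming and layout are determined by `openvino_executor_runner.cpp`, so treat this as a sketch), something like the following could be used:

```python
# Hypothetical: read back a raw float32 output tensor saved by the runner.
# The file name and dtype are assumptions; check openvino_executor_runner.cpp
# for the actual output format.
import numpy as np

logits = np.fromfile("outputs/output_0.raw", dtype=np.float32)
print(logits.shape, logits[:5])  # flat buffer; reshape to the model's output shape
```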
