Commit 81c1cc3

mcr229 authored and mergennachin committed
tutorial update (#870)
Summary:
Pull Request resolved: #870

It looks like we want to be using the long-term APIs now (D49972005), so this updates the docs to match that usage.

Reviewed By: kirklandsign

Differential Revision: D50235705

fbshipit-source-id: c71b46fff333eac10e938cca922b6a32852d7b5c
1 parent 6e5938e commit 81c1cc3

File tree

1 file changed: +2, -2 lines


docs/source/tutorial-xnnpack-delegate-lowering.md

Lines changed: 2 additions & 2 deletions
````diff
@@ -34,7 +34,7 @@ sample_inputs = (torch.randn(1, 3, 224, 224), )
 edge = export_to_edge(mobilenet_v2, example_inputs)
 
 
-edge.to_backend(XnnpackPartitioner)
+edge = edge.to_backend(XnnpackPartitioner)
 ```
 
 We will go through this example with the MobileNetV2 pretrained model downloaded from the TorchVision library. The flow of lowering a model starts after exporting the model `to_edge`. We call the `to_backend` api with the `XnnpackPartitioner`. The partitioner identifies the subgraphs suitable for XNNPACK backend delegate to consume. Afterwards, the identified subgraphs will be serialized with the XNNPACK Delegate flatbuffer schema and each subgraph will be replaced with a call to the XNNPACK Delegate.
@@ -92,7 +92,7 @@ edge = export_to_edge(
     example_inputs,
     edge_compile_config=EdgeCompileConfig(_check_ir_validity=False)
 )
-edge.to_backend(XnnpackPartitioner)
+edge = edge.to_backend(XnnpackPartitioner)
 
 exec_prog = edge.to_executorch()
 save_pte_program(exec_prog.buffer, "qs8_xnnpack_mobilenetv2.pte")
````
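For context, here is a minimal end-to-end sketch of the lowering flow this diff documents, using the reassignment pattern the commit introduces (`edge = edge.to_backend(...)`). The import paths and the `export_to_edge` / `save_pte_program` helpers are assumptions based on the ExecuTorch example utilities the tutorial relies on, and the exact signatures (including whether `to_backend` takes the `XnnpackPartitioner` class, as in the diff, or an instance) may differ by version; treat this as illustrative rather than canonical.

```python
# Hedged sketch of the tutorial's lowering flow, mirroring the diff above.
# Import locations below are assumptions about the ExecuTorch layout of this era.
import torch
import torchvision.models as models
from torchvision.models import MobileNet_V2_Weights

from executorch.exir import EdgeCompileConfig  # assumed location
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner  # assumed location
from executorch.examples.portable.utils import export_to_edge, save_pte_program  # assumed example helpers

# Pretrained MobileNetV2 from TorchVision, as used in the tutorial.
mobilenet_v2 = models.mobilenet_v2(weights=MobileNet_V2_Weights.DEFAULT).eval()
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Export the model to the Edge dialect.
edge = export_to_edge(
    mobilenet_v2,
    sample_inputs,
    edge_compile_config=EdgeCompileConfig(_check_ir_validity=False),
)

# The point of this commit: capture the return value of to_backend
# instead of calling it for its side effect.
edge = edge.to_backend(XnnpackPartitioner)

# Serialize to an ExecuTorch program and save the .pte file (filename illustrative).
exec_prog = edge.to_executorch()
save_pte_program(exec_prog.buffer, "xnnpack_mobilenetv2.pte")
```

The reassignment matters because, with these longer-term APIs, `to_backend` returns the delegated edge program rather than only mutating `edge` in place, which is why both call sites in the diff now capture the return value.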
