
Commit c86d0d0

tarun292 authored and facebook-github-bot committed
Use source_fn_stack in xnnpack tutorial (#5948)
Summary: Fix XNNPack tutorial to use `source_fn_stack` instead of `source_fn`.

Pull Request resolved: #5948

Reviewed By: dvorjackz

Differential Revision: D63950962

fbshipit-source-id: 5b4ced1c7edee4f5d60e9bffb8ab7a4a82788fcb
1 parent 59cc817 commit c86d0d0

File tree

1 file changed (+4, -4)


docs/source/native-delegates-executorch-xnnpack-delegate.md

Lines changed: 4 additions & 4 deletions
@@ -25,18 +25,18 @@ The partitioner is implemented by backend delegates to mark nodes suitable for l

##### Module-based partitioning

-`source_fn` is embedded in the node’s metadata and gives information on where these nodes come from. For example, modules like `torch.nn.Linear`, when captured and exported with `to_edge`, generate groups of nodes for their computation. The group of nodes associated with computing the linear module then has a `source_fn` of `torch.nn.Linear`. Partitioning based on `source_fn` allows us to identify groups of nodes which are lowerable via XNNPACK.
+`source_fn_stack` is embedded in the node’s metadata and gives information on where these nodes come from. For example, modules like `torch.nn.Linear`, when captured and exported with `to_edge`, generate groups of nodes for their computation. The group of nodes associated with computing the linear module then has a `source_fn_stack` of `torch.nn.Linear`. Partitioning based on `source_fn_stack` allows us to identify groups of nodes which are lowerable via XNNPACK.

For example, after capturing `torch.nn.Linear`, you would find the following key in the metadata of the `addmm` node associated with linear:
```python
->>> print(linear_node.meta["source_fn"])
-'source_fn': ('fn', <class 'torch.nn.modules.linear.Linear'>)
+>>> print(linear_node.meta["source_fn_stack"])
+'source_fn_stack': ('fn', <class 'torch.nn.modules.linear.Linear'>)
```
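To see where this metadata comes from end to end, here is a minimal sketch (an editor's illustration, not part of this diff), assuming the `torch.export.export` API; the `TinyModel` module and its input shape are illustrative assumptions:

```python
# Illustrative sketch: inspect source_fn_stack after export.
# TinyModel and the input shape are assumptions for this example.
import torch
from torch.export import export


class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)


ep = export(TinyModel(), (torch.randn(1, 4),))

# Each node traced from a module records where it came from; the
# linear compute node carries a source_fn_stack entry pointing back
# to torch.nn.Linear.
for node in ep.graph_module.graph.nodes:
    stack = node.meta.get("source_fn_stack")
    if stack:
        print(node.name, stack)
```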


##### Op-based partitioning

-The `XnnpackPartitioner` also partitions using op targets. It traverses the graph and identifies individual nodes which are lowerable to XNNPACK. A drawback of module-based partitioning is that operators which come from [decompositions](https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py) may be skipped. For example, an operator like `torch.nn.Hardsigmoid` is decomposed into adds, muls, divs, and clamps. While hardsigmoid is not lowerable, we can lower the decomposed ops. Relying on `source_fn` metadata would skip these lowerable operators because they belong to a non-lowerable module, so to improve model performance we greedily lower operators based on op targets as well as `source_fn`.
+The `XnnpackPartitioner` also partitions using op targets. It traverses the graph and identifies individual nodes which are lowerable to XNNPACK. A drawback of module-based partitioning is that operators which come from [decompositions](https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py) may be skipped. For example, an operator like `torch.nn.Hardsigmoid` is decomposed into adds, muls, divs, and clamps. While hardsigmoid is not lowerable, we can lower the decomposed ops. Relying on `source_fn_stack` metadata would skip these lowerable operators because they belong to a non-lowerable module, so to improve model performance we greedily lower operators based on op targets as well as `source_fn_stack`.
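As a companion illustration (again an editor's sketch, not the actual `XnnpackPartitioner` implementation), op-target matching can be prototyped by filtering graph nodes on `node.target`; the `LOWERABLE_TARGETS` set and the use of `run_decompositions()` here are assumptions for this example:

```python
# Illustrative sketch of greedy op-target matching; LOWERABLE_TARGETS
# is a made-up subset, not the real XnnpackPartitioner configuration.
import torch
from torch.export import export

LOWERABLE_TARGETS = {
    torch.ops.aten.add.Tensor,
    torch.ops.aten.mul.Tensor,
    torch.ops.aten.div.Tensor,
    torch.ops.aten.clamp.default,
}


class HardsigmoidModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.hardsigmoid(x)


# Decompose hardsigmoid into simpler ops, then flag the nodes whose
# targets fall in the (illustrative) lowerable set, even though the
# originating module is not itself lowerable.
ep = export(HardsigmoidModel(), (torch.randn(2, 3),)).run_decompositions()
for node in ep.graph_module.graph.nodes:
    if node.op == "call_function" and node.target in LOWERABLE_TARGETS:
        print("lowerable:", node.name, node.target)
```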

##### Passes
