
Commit e7e6051

mcr229 authored and facebook-github-bot committed
Temporarily add aten.op to supported quant modules (#81)
Summary: Pull Request resolved: #81

The quant flow for graph capturing is outlined here: https://fb.workplace.com/groups/257735836456307/permalink/545316467698241/

The flow becomes:

```
capture_pre_autograd_graph --> prepare --> convert --> exir.capture
```

As a result, when we capture the converted GraphModule, source_fn changes from a torch.nn.Module to an <OpOverload torch.ops.aten.*> (we are recapturing a GraphModule, not a torch.nn.Module). A fix for this is in progress, but until it lands we have to add the torch.ops.aten.* targets to our supported modules.

Differential Revision: D48488927

fbshipit-source-id: 59e0797170b6030af123ad79e63de96d8a853df2
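For context, here is a minimal sketch of the flow above, assuming the PT2E quantization APIs of this period (XNNPACKQuantizer with prepare_pt2e/convert_pt2e) and executorch's exir.capture; the toy model and calibration step are illustrative, and exact import paths may vary by version:

```
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)
from executorch import exir

# Toy model and inputs (illustrative only)
model = torch.nn.Linear(4, 4).eval()
example_inputs = (torch.randn(1, 4),)

# 1. capture_pre_autograd_graph
gm = capture_pre_autograd_graph(model, example_inputs)

# 2. prepare: insert observers driven by the XNNPACK quantizer
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(gm, quantizer)
prepared(*example_inputs)  # calibration

# 3. convert: fold observers into quantize/dequantize ops
converted = convert_pt2e(prepared)

# 4. exir.capture: re-capturing the converted GraphModule is the step that
# rewrites source_fn from torch.nn.Linear to torch.ops.aten.addmm.default
edge = exir.capture(converted, example_inputs)
```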
1 parent 153b258 · commit e7e6051

File tree

2 files changed: +5 −0 lines


backends/xnnpack/partition/configs.py

4 additions, 0 deletions

```
@@ -111,6 +111,10 @@
     torch.nn.functional.leaky_relu,
     torch.nn.functional.leaky_relu_,
     torch.nn.LeakyReLU,
+    # TODO(): In the quant --> export flow, source_fn is the operator target
+    # instead of the module name. This is actively being fixed, but until
+    # then we add these operator targets to the partitioner.
+    torch.ops.aten.convolution.default,
+    torch.ops.aten.addmm.default,
 ]

 SUPPORTED_IMPLICIT_Q_DQ_MODULES_SET = set(SUPPORTED_QUANT_MODULES)
```
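To see why the aten targets need to be in this list, here is a hypothetical check in the spirit of the partitioner (not the actual partitioner code), assuming source_fn is stored in node.meta as a (name, target) tuple as it was in this era of torch.fx:

```
import torch

SUPPORTED_QUANT_MODULES = [
    torch.nn.Linear,
    torch.nn.functional.linear,
    # Added by this commit: after recapture, source_fn holds an OpOverload
    # such as torch.ops.aten.addmm.default rather than the nn.Module class.
    torch.ops.aten.convolution.default,
    torch.ops.aten.addmm.default,
]

def node_is_supported(node: torch.fx.Node) -> bool:
    # source_fn was commonly a (name, target) tuple at this point; adjust
    # the indexing if your torch version stores a bare target instead.
    source_fn = node.meta.get("source_fn")
    if source_fn is None:
        return False
    return source_fn[1] in SUPPORTED_QUANT_MODULES
```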

backends/xnnpack/passes/convert_to_linear.py

1 addition, 0 deletions

```
@@ -27,6 +27,7 @@ class ConvertToLinearPass(ExportPass):
     linear_modules = [
         torch.nn.Linear,
         torch.nn.functional.linear,
+        torch.ops.aten.addmm.default,
     ]

     targets = [
```
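The addmm entry works because nn.Linear lowers to aten.addmm with a transposed weight. A quick illustrative check of that correspondence (toy tensors, not part of the pass):

```
import torch

x = torch.randn(2, 3)
w = torch.randn(4, 3)  # nn.Linear weight layout: (out_features, in_features)
b = torch.randn(4)

# functional.linear computes x @ w.T + b; addmm(b, x, w.t()) is the same.
linear_out = torch.nn.functional.linear(x, w, b)
addmm_out = torch.ops.aten.addmm.default(b, x, w.t())

assert torch.allclose(linear_out, addmm_out)
```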
