
Commit 6cae36f

mcr229 authored and facebook-github-bot committed
Temporarily add aten.op to supported quant modules, Commit 1
Differential Revision: https://internalfb.com/D48488927
fbshipit-source-id: 496fd4244abb273d8f42216e146126e2034e34c7
1 parent 1f050ec commit 6cae36f

File tree

2 files changed (+5, -0 lines)


backends/xnnpack/partition/configs.py

Lines changed: 4 additions & 0 deletions

@@ -111,6 +111,10 @@
     torch.nn.functional.leaky_relu,
     torch.nn.functional.leaky_relu_,
     torch.nn.LeakyReLU,
+    # TODO(): In the quant --> export flow, source_fn is the operator target instead of the module name.
+    # This is actively being fixed, but until then, we add these operator target names to the partitioner.
+    torch.ops.aten.convolution.default,
+    torch.ops.aten.addmm.default,
 ]

 SUPPORTED_IMPLICIT_Q_DQ_MODULES_SET = set(SUPPORTED_QUANT_MODULES)
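To illustrate why the aten operator targets must sit alongside the module and functional entries, here is a minimal sketch of list-based matching. The helper `is_supported_source_fn` and the abbreviated list are hypothetical illustrations, not the actual partitioner code:

```python
import torch

# Abbreviated mirror of the patched SUPPORTED_QUANT_MODULES list:
# module/functional entries plus the temporarily added aten operator targets.
SUPPORTED_QUANT_MODULES = [
    torch.nn.LeakyReLU,
    torch.nn.functional.leaky_relu,
    torch.ops.aten.convolution.default,
    torch.ops.aten.addmm.default,
]

def is_supported_source_fn(source_fn) -> bool:
    """Return True if a node's source_fn matches a supported entry.

    In the quant --> export flow, source_fn may be the aten operator
    target rather than the originating module/functional, so both kinds
    of entries must be present for matching to succeed.
    """
    return source_fn in SUPPORTED_QUANT_MODULES

print(is_supported_source_fn(torch.ops.aten.addmm.default))  # True
print(is_supported_source_fn(torch.ops.aten.mm.default))     # False
```

Without the two added aten entries, nodes whose source_fn is already an operator target would fall through this check and be left out of the partition.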

backends/xnnpack/passes/convert_to_linear.py

Lines changed: 1 addition & 0 deletions

@@ -27,6 +27,7 @@ class ConvertToLinearPass(ExportPass):
     linear_modules = [
         torch.nn.Linear,
         torch.nn.functional.linear,
+        torch.ops.aten.addmm.default,
     ]

     targets = [
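The convert_to_linear change reflects that, during export, a linear layer with a 2-D input and a bias lowers to aten.addmm, so the pass must recognize the aten target as a linear-producing op. A quick numerical check of that equivalence (assuming a local PyTorch install):

```python
import torch

x = torch.randn(2, 3)
weight = torch.randn(4, 3)
bias = torch.randn(4)

# addmm(bias, x, weight.T) is the decomposed form that linear lowers to
# for 2-D inputs, which is why ConvertToLinearPass must also match the
# aten operator target, not just nn.Linear / F.linear.
out_addmm = torch.ops.aten.addmm.default(bias, x, weight.t())
out_linear = torch.nn.functional.linear(x, weight, bias)
assert torch.allclose(out_addmm, out_linear)
```

The same reasoning applies to the partitioner change above: both code paths need to handle the post-decomposition operator name until source_fn reporting is fixed.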
