
Commit 58fe1ab

mcr229 authored and facebook-github-bot committed
fix quantized ic3/ic4/edsr examples (#574)
Summary:
Pull Request resolved: #574

The fix for quantized ic3/ic4/edsr requires running the pass `convert_scalars_to_attrs` after `capture_pre_autograd_graph` and before preparing and converting. We add this pass to the examples to fix the quantized ic3/ic4/edsr models.

Reviewed By: digantdesai, guangy10

Differential Revision: D49850155

fbshipit-source-id: 359b7bbec45426292afe7d81ac1d816522c6e921
1 parent cbe2d99 commit 58fe1ab

File tree

1 file changed: +5 −0 lines


examples/quantization/utils.py

Lines changed: 5 additions & 0 deletions
@@ -11,6 +11,9 @@
     get_symmetric_quantization_config,
     XNNPACKQuantizer,
 )
+from torch.ao.quantization.quantizer.xnnpack_quantizer_utils import (
+    convert_scalars_to_attrs,
+)


 def quantize(model, example_inputs):
@@ -20,6 +23,8 @@ def quantize(model, example_inputs):
     # if we set is_per_channel to True, we also need to add out_variant of quantize_per_channel/dequantize_per_channel
     operator_config = get_symmetric_quantization_config(is_per_channel=False)
     quantizer.set_global(operator_config)
+    # TODO(T165162973): This pass shall eventually be folded into quantizer
+    model = convert_scalars_to_attrs(model)
     m = prepare_pt2e(model, quantizer)
     # calibration
     m(*example_inputs)
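For context on why the pass matters: captured graphs can carry Python float literals inline as operator arguments, and a quantizer that annotates tensor inputs cannot see them, so such scalars must first be lifted into module attributes (tensors). Below is a minimal, self-contained sketch of that idea written against plain `torch.fx`; the helper name `lift_scalars_to_attrs` and the toy `AddScalar` module are illustrative assumptions, not the actual ExecuTorch/`torch.ao` implementation.

```python
import torch
import torch.fx as fx


class AddScalar(torch.nn.Module):
    # Toy module whose traced graph contains an inline scalar (1.0).
    def forward(self, x):
        return x + 1.0


def lift_scalars_to_attrs(gm: fx.GraphModule) -> fx.GraphModule:
    """Sketch: replace float literals in call_function args with
    get_attr nodes pointing at registered tensor buffers."""
    count = 0
    for node in list(gm.graph.nodes):
        if node.op != "call_function":
            continue
        new_args = []
        for arg in node.args:
            if isinstance(arg, float):
                # Materialize the scalar as a module buffer...
                name = f"_lifted_scalar_{count}"
                count += 1
                gm.register_buffer(name, torch.tensor(arg))
                # ...and reference it via a get_attr node in the graph.
                with gm.graph.inserting_before(node):
                    new_args.append(gm.graph.get_attr(name))
            else:
                new_args.append(arg)
        node.args = tuple(new_args)
    gm.recompile()
    return gm


gm = fx.symbolic_trace(AddScalar())
gm = lift_scalars_to_attrs(gm)
```

After the pass, the graph is numerically unchanged but contains no bare float arguments, which is the precondition the real `convert_scalars_to_attrs` establishes before `prepare_pt2e` runs.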
