
Commit 5395ae6

fold quantize in convert
Differential Revision: D61814397
Pull Request resolved: #4889
1 parent: 69472e5

File tree

1 file changed: +1, -1 lines changed


examples/models/phi-3-mini/export_phi-3-mini.py

Lines changed: 1 addition & 1 deletion

@@ -69,7 +69,7 @@ def export(args) -> None:
     )
     model = prepare_pt2e(model, xnnpack_quantizer)  # pyre-fixme[6]
     model(*example_inputs)
-    model = convert_pt2e(model, fold_quantize=False)
+    model = convert_pt2e(model)
     DuplicateDynamicQuantChainPass()(model)
     # TODO(lunwenh): update it to use export once
     # https://github.com/pytorch/pytorch/issues/128394 is resolved.
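
The dropped fold_quantize=False argument is no longer needed because convert_pt2e folds quantize ops on weights into quantized constants by default. The sketch below shows the PT2E prepare/calibrate/convert flow around the changed line; the toy Linear model, example inputs, export_for_training call, and dynamic symmetric quantizer config are illustrative assumptions, not code taken from the phi-3-mini script.

import torch
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Placeholder model and inputs (assumptions for illustration only).
model = torch.nn.Linear(16, 16).eval()
example_inputs = (torch.randn(1, 16),)

# Capture the model as a graph before quantization (the script itself still
# uses an older capture path, per the TODO in the diff above).
exported = torch.export.export_for_training(model, example_inputs).module()

# Configure an XNNPACK quantizer; the dynamic symmetric config is an assumption.
quantizer = XNNPACKQuantizer()
quantizer.set_global(get_symmetric_quantization_config(is_dynamic=True))

prepared = prepare_pt2e(exported, quantizer)   # insert observers
prepared(*example_inputs)                      # calibrate on sample inputs

# fold_quantize defaults to True, so quantize ops on weights are folded into
# quantized constants without passing the flag explicitly.
converted = convert_pt2e(prepared)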
