Add pass for replacing dq-q patterns with rescale #8415
Merged
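The rewrite rests on a simple identity: a dequantize (scale s_dq, zero point zp_in) immediately followed by a quantize (scale s_q, zero point zp_out) never needs to leave the integer domain, because the composition is just an integer rescale by s_dq / s_q around the two zero points. A minimal numeric sketch of that identity (the values are made up for illustration, the scales are powers of two so the comparison is exact, and clamping to the int8 range is ignored):

import torch

# Hypothetical quantization parameters, not taken from the PR.
s_dq, zp_in = 0.5, 3     # dequantize scale / zero point
s_q, zp_out = 0.25, -8   # quantize scale / zero point

x = torch.tensor([-120, -3, 0, 42, 127], dtype=torch.int8)

# dq -> q, as the graph computes it before the pass (clamping ignored).
dequantized = (x.to(torch.int32) - zp_in) * s_dq
requantized = torch.round(dequantized / s_q).to(torch.int32) + zp_out

# The single rescale the pass inserts instead: scale = s_dq / s_q.
rescaled = torch.round((x.to(torch.int32) - zp_in) * (s_dq / s_q)).to(torch.int32) + zp_out

assert torch.equal(requantized, rescaled)

With non-power-of-two scales the two paths can differ by one unit at rounding ties, which is why the pass only claims equivalence at the rescale level, not bit-exactness with the float round trip.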
@@ -0,0 +1,109 @@
# Copyright 2025 Arm Limited and/or its affiliates.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

import logging
from copy import copy
from typing import cast

import torch
from executorch.backends.arm._passes.arm_pass_utils import create_node
from executorch.backends.arm.tosa_quant_utils import dq_op, q_op, QuantArgs
from executorch.exir.pass_base import ExportPass, PassResult
from torch import Tensor
from torch.fx import GraphModule, Node
from torch.library import custom_op, register_fake

logger = logging.getLogger(__name__)

@custom_op("tosa::_rescale", mutates_args=()) # type: ignore[misc] | ||
def rescale( | ||
x: Tensor, dtype: torch.dtype, scale: float, in_zp: int, out_zp: int | ||
) -> Tensor: | ||
logger.warning( | ||
"Ran default implementation of tosa::_rescale." | ||
"This op is meant to always be inserted inside a partition and a correct default implementation is not implemented." | ||
) | ||
# Clone is needed to not return reference when rescaling to same dtype. | ||
# This is a neccessary requirement for non-mutating custom ops. | ||
return x.to(dtype=dtype).clone() | ||
|
||
|
||
@register_fake("tosa::_rescale") # type: ignore[misc] | ||
def rescale_fake( | ||
x: Tensor, dtype: torch.dtype, scale: float, in_zp: int, out_zp: int | ||
) -> Tensor: | ||
"""Casts the input tensor to dtype `dtype` to produce the correct tensor meta for a _rescale op. | ||
Additionally validates TOSA constraints of a RESCALE op. | ||
""" | ||
if not (dtype == torch.int32 or dtype == torch.int8): | ||
raise NotImplementedError( | ||
"tosa::rescale currently only supports int32 and int8." | ||
) | ||
if dtype == torch.int32 and out_zp != 0: | ||
raise ValueError( | ||
"TOSA requires output_zp to be zero when the output dtype is int32." | ||
) | ||
if x.dtype == torch.int32 and in_zp != 0: | ||
raise ValueError( | ||
"TOSA requires input_zp to be zero when the input dtype is int32." | ||
) | ||
if x.dtype == torch.int8 and not -128 <= in_zp <= 127: | ||
raise ValueError(f"{in_zp=} outside valid range (-128,127) for int8.") | ||
if dtype == torch.int8 and not -128 <= out_zp <= 127: | ||
raise ValueError(f"{out_zp=} outside valid range (-128,127) for int8.") | ||
|
||
return x.to(dtype=dtype).clone() | ||
|
||
|
||
class InsertRescalePass(ExportPass):
    """Finds patterns of dq -> q and replaces them with tosa::_rescale ops.

    Does not guarantee that the dtypes and zero points are valid
    in TOSA; that is the job of the quantization annotator that
    produced the dq and q nodes. The TOSA constraints are validated
    in the fake implementation of tosa::_rescale.
    """

    def fold_dq_q_to_rescale(self, node: Node, user: Node, graph_module: GraphModule):
        dq_args = QuantArgs.from_operator(node.target, node.args)
        q_args = QuantArgs.from_operator(user.target, user.args)
        new_scale = dq_args.scale / q_args.scale

        with graph_module.graph.inserting_before(node):
            rescale_node = create_node(
                graph_module.graph,
                torch.ops.tosa._rescale.default,
                (
                    node.all_input_nodes[0],
                    q_args.dtype,
                    new_scale,
                    dq_args.zp,
                    q_args.zp,
                ),
            )
            rescale_node.meta = copy(user.meta)
            user.replace_all_uses_with(rescale_node)
            graph_module.graph.erase_node(user)

    def call(self, graph_module: GraphModule) -> PassResult:
        modified = False
        for node in graph_module.graph.nodes:
            node = cast(Node, node)

            if node.target is not dq_op:
                continue
            # Copy the users since we remove them while iterating, modifying node.users.
            for user in copy(node.users):
                if user.target is q_op:
                    self.fold_dq_q_to_rescale(node, user, graph_module)
                    modified = True
            if len(node.users) == 0:
                graph_module.graph.erase_node(node)

        graph_module = super().call(graph_module).graph_module
        graph_module.recompile()
        return PassResult(graph_module, modified)
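For orientation, InsertRescalePass follows the usual ExportPass protocol: an instance is callable on an FX GraphModule and returns a PassResult. A hypothetical usage sketch, assuming a graph_module that already contains back-to-back dq -> q node pairs; the import path is an assumption, since the new file's location is not visible in this view:

from torch.fx import GraphModule

# Hypothetical import path for the new pass; the actual module location in the
# Arm backend is not shown here.
from executorch.backends.arm._passes.insert_rescales_pass import InsertRescalePass


def fold_rescales(graph_module: GraphModule) -> GraphModule:
    # ExportPass instances are callable and return a PassResult.
    result = InsertRescalePass()(graph_module)
    # After the pass, each dq -> q pair is a single tosa::_rescale node.
    return result.graph_module if result.modified else graph_module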
@@ -31,6 +31,7 @@
    op_reciprocal,
    op_relu,
    op_repeat,
    op_rescale,
    op_rshift,
    op_rsqrt,
    op_sigmoid,
@@ -0,0 +1,70 @@
# Copyright 2025 Arm Limited and/or its affiliates.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.

# pyre-unsafe

from typing import cast, List

import executorch.backends.arm.tosa_quant_utils as tosa_quant_utils
import serializer.tosa_serializer as ts  # type: ignore
import torch

import tosa.Op as TosaOp  # type: ignore
from executorch.backends.arm.operators.node_visitor import (
    NodeVisitor,
    register_node_visitor,
)
from executorch.backends.arm.tosa_mapping import map_dtype, TosaArg
from torch.fx import Node

@register_node_visitor
class RescaleVisitor(NodeVisitor):
    target = "_rescale.default"

    def define_node(
        self,
        node: Node,
        tosa_graph: ts.TosaSerializer,
        inputs: List[TosaArg],
        output: TosaArg,
    ) -> None:

        input_dtype = inputs[0].dtype
        output_dtype = cast(torch.dtype, node.args[1])
        scale = cast(float, node.args[2])
        input_zp = cast(int, node.args[3])
        output_zp = cast(int, node.args[4])

        # Skip int16 cases for now.
        if input_dtype != map_dtype(torch.int8) and input_zp != 0:
            raise ValueError(
                f"If input dtype is not int8, input_zp must be 0. Got input_dtype={ts.DTypeNames[input_dtype]}, {input_zp=}"
            )
        if output_dtype != torch.int8 and output_zp != 0:
            raise ValueError(
                f"If output dtype is not int8, output_zp must be 0. Got {output_dtype=}, {output_zp=}"
            )
        scale_width = 32 if output_dtype == torch.int32 else 16
        multiplier, shift = tosa_quant_utils.compute_multiplier_and_shift(
            scale, scale_width
        )
        attr_rescale = ts.TosaSerializerAttribute()
        attr_rescale.RescaleAttribute(
            input_zp=input_zp,
            output_zp=output_zp,
            multiplier=[multiplier],
            shift=[shift],
            scale32=output_dtype == torch.int32,
            double_round=False,
            per_channel=False,
            input_unsigned=False,
            output_unsigned=False,
        )

        tosa_graph.addOperator(
            TosaOp.Op().RESCALE, [inputs[0].name], [output.name], attr_rescale
        )
This seems to have broken the unittest-arm job:
https://github.com/pytorch/executorch/actions/runs/13311971065/job/37176555490#step:15:11475
Yes, I also think it did, sorry for the mess-up. I'll confirm with a revert to make sure it fixes it. If so, let's merge the revert and fix/retry this PR later.