Add div decomposition in ArmQuantizer #5267
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5267
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure, 2 Unrelated Failures as of commit 104d3a8 with merge base 20a157f.
NEW FAILURE - the following job has failed. BROKEN TRUNK - the following jobs failed but were present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -45,3 +50,8 @@ def transform_to_backend_pipeline(
        self.add_pass(AnnotateChannelsLastDimOrder())
        return self._transform(graph_module)

    def transform_for_annotation_pipeline(self, graph_module: torch.fx.GraphModule):
        self.add_pass(DecomposeDivPass())
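The implementation of `DecomposeDivPass` isn't shown in this hunk, but per the commit message it rewrites `aten.div` into `aten.reciprocal` followed by `aten.mul`. A minimal pure-Python sketch of that rewrite over a toy node list — the `Node` class and op-name strings here are hypothetical stand-ins, not the real torch.fx/executorch types:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Node:
    op: str            # toy op name, e.g. "div", "mul", "reciprocal"
    inputs: List[str]  # names of input values
    output: str        # name of the produced value

def decompose_div(nodes: List[Node]) -> List[Node]:
    """Replace every div(x, y) with recip = reciprocal(y); mul(x, recip)."""
    out: List[Node] = []
    for n in nodes:
        if n.op == "div":
            recip = f"{n.output}_recip"  # fresh intermediate value name
            out.append(Node("reciprocal", [n.inputs[1]], recip))
            out.append(Node("mul", [n.inputs[0], recip], n.output))
        else:
            out.append(n)
    return out

graph = [Node("div", ["a", "b"], "c")]
decomposed = decompose_div(graph)
print([(n.op, n.inputs, n.output) for n in decomposed])
```

Decomposing before annotation means the quantizer only ever sees `reciprocal` and `mul`, so those two ops get quantization annotations instead of `div`.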
I guess the motivation for this API is clear and this is allowed; however, I do want to make sure that modifying the graph in the Quantizer is something we are OK with from a general ET lowering point of view. I also see SDPA as a valid use case.
cc @kimishpatel, @mergennachin - any comments?
Ok - I'll resolve the merge conflict and mark this as ready for review tomorrow unless I get new comments.
Force-pushed from 2edce3d to af8f22b
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
still failing with
Force-pushed from 124fc23 to cd6dea4
I was not able to find this error in the logs of the failed jobs here on GH and could not reproduce it locally, but I rebased and force-pushed, and now only one job fails from what I can see.
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
if is_quant_node:
    input = inputs[0]
    input_qargs = get_quant_node_args(node.args[0])
New internal error:

executorch/backends/arm/operators/op_reciprocal.py:44:46 Incompatible parameter type [6]: In call `get_quant_node_args`, for 1st positional argument, expected `Node` but got `Union[None, List[typing.Any], Dict[str, typing.Any], bool, complex, float, int, range, slice, str, SymBool, SymFloat, SymInt, device, dtype, layout, memory_format, OpOverload, Tensor, Node, typing.Tuple[typing.Any, ...]]`.
Ok, I assume this is due to some internal extra type checker? I realized that get_quant_node_args was called differently than in other node visitors; hopefully fixing that solves the error.
Also rebase when you fix this 😓
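The error arises because elements of `Node.args` in torch.fx have a broad union type, while the helper expects a `Node`. A common fix is to narrow the argument at runtime before the call. A hedged pure-Python illustration of that narrowing pattern — this `Node` class and `get_quant_node_args` are simplified stand-ins for the real executorch/torch.fx types:

```python
# Toy stand-ins: in torch.fx, Node.args elements have a wide union type,
# which is what the type checker complains about.
class Node:
    def __init__(self, name: str, args: tuple):
        self.name = name
        self.args = args  # elements could be Node, int, str, None, ...

def get_quant_node_args(node: "Node") -> str:
    # stand-in for the real helper, which requires a Node argument
    return f"qargs({node.name})"

producer = Node("q", ())
reciprocal_node = Node("reciprocal", (producer,))

arg0 = reciprocal_node.args[0]
# Runtime narrowing: after this assert, a type checker knows arg0 is a Node.
assert isinstance(arg0, Node)
print(get_quant_node_args(arg0))
```

`typing.cast(Node, arg0)` is the zero-cost alternative when the invariant is guaranteed elsewhere, but the `isinstance` assert also catches real bugs at runtime.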
Implements node visitor and tests. Quantized by one_to_one annotator. Signed-off-by: Erik Lundell <[email protected]> Change-Id: I3f25096a8b908d7c25b6cd83bd7edb6871b145ab
Implements a pass that decomposes aten.div into aten.reciprocal and aten.mul. This is done in the Quantizer to get quantization annotations on the decomposed operators. Adds infra for passes in ArmQuantizer. Signed-off-by: Erik Lundell <[email protected]> Change-Id: Idd1698dc5fc82ab42b68094b405fb3a08804a45e
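The decomposition relies on the identity div(a, b) == mul(a, reciprocal(b)). A quick numeric sanity check in plain Python, with floats standing in for tensors:

```python
def direct_div(a: float, b: float) -> float:
    """Reference: plain division."""
    return a / b

def decomposed_div(a: float, b: float) -> float:
    """Same result via reciprocal followed by mul, as the pass emits."""
    recip = 1.0 / b
    return a * recip

# The two paths agree up to floating-point rounding.
for a, b in [(6.0, 3.0), (1.0, 4.0), (-2.5, 0.5)]:
    assert abs(direct_div(a, b) - decomposed_div(a, b)) < 1e-12
print("decomposition matches direct division")
```

In the quantized flow the payoff is that `reciprocal` and `mul` each get their own quantization parameters, which is what the one_to_one annotator mentioned in the commit message operates on.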
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@digantdesai merged this pull request in c5fdebd.
Suggestion of what a pass to decompose a div in ArmQuantizer might look like.