
Commit 42ff569

Arm backend: Don't set requires_grad in Arm-backend fold_qdq-pass (#8027)
requires_grad was previously set to True manually for all placeholders, as a workaround for an issue where some params did not have requires_grad properly set. This caused issues for placeholders that were not leaf variables, and since the workaround is no longer needed it can simply be removed.
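The "not leaf variables" failure mode mentioned above can be demonstrated with a minimal, self-contained PyTorch sketch (this snippet is illustrative only and is not part of the pass): PyTorch only allows toggling the requires_grad flag on leaf tensors, so assigning it on a tensor that has a grad_fn raises a RuntimeError.

```python
import torch

x = torch.randn(2, requires_grad=True)  # leaf tensor
y = x * 2                               # non-leaf: produced by an autograd op (has grad_fn)

x.requires_grad = False  # fine: leaf flags may be changed freely

try:
    y.requires_grad = False  # raises: flag of a non-leaf cannot be changed
except RuntimeError as err:
    print("non-leaf error:", err)
```

This is why unconditionally forcing requires_grad on every placeholder was fragile: it only works if every placeholder's FakeTensor happens to be a leaf.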
1 parent 3eea1f1 commit 42ff569

File tree

1 file changed: +0 −15 lines changed


backends/arm/_passes/fold_qdq_with_annotated_qparams_pass.py

Lines changed: 0 additions & 15 deletions

@@ -105,21 +105,6 @@ def fold_and_annotate_arg(
     for arg in arg_list:
         if not isinstance(arg, Node):
             return
-        """
-        Make sure arg has requires_grad set to False
-        For parameters that are not quantized, sometimes (i.e. convolution)
-        the Parameter(FakeTensor(...)) has requires_grad set to True, which
-        causes the retracing of the graph to fail with:
-
-        E RuntimeError: isDifferentiableType(variable.scalar_type()) INTERNAL ASSERT FAILED at "/Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/autograd/functions/utils.h":74, please report a bug to PyTorch.
-        E
-        E While executing %aten_convolution_default : [num_users=1] = call_function[target=executorch.exir.dialects.edge._ops.aten.convolution.default](args = (%quantized_decomposed_quantize_per_tensor_default, %b__frozen_param0, %p__param_constant1, [1, 1], [0, 0], [1, 1], False, [0, 0], 1), kwargs = {})
-        E Original traceback:
-        E File "/Users/perast01/src/executorch/backends/arm/test/ops/test_conv2d.py", line 110, in forward
-        E x = conv(x)
-        """
-        if arg.op == "placeholder":
-            arg.meta["val"].requires_grad = False

         arg_quant_params = None
         if arg.target == dq_op:

0 commit comments
