Commit 6cb59de

mcremon-meta authored and facebook-github-bot committed
Allow int8 type in quantized_layer_norm (#5899)
Summary: The meta kernel was returning uint8 as a hardcoded value.

Reviewed By: zonglinpeng

Differential Revision: D63659948
1 parent 8cd57c2 commit 6cb59de

File tree

1 file changed (+1, -1)


backends/cadence/aot/ops_registrations.py

Lines changed: 1 addition & 1 deletion
@@ -164,7 +164,7 @@ def quantized_layer_norm_meta(
     output_scale: float,
     output_zero_point: int,
 ) -> torch.Tensor:
-    return input.new_empty(input.size(), dtype=torch.uint8)
+    return input.new_empty(input.size(), dtype=input.dtype)


 @register_fake("cadence::quantized_relu")
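The one-line change above illustrates a general rule for meta/fake kernels: they only describe the output's shape and dtype for tracing, so a hardcoded `torch.uint8` silently mislabels the output whenever the quantized input is `int8`. A minimal, hypothetical sketch of the pattern (the function name here is illustrative, not the actual Cadence registration, and the extra layer-norm parameters are omitted):

```python
import torch

def quantized_layer_norm_meta_sketch(input: torch.Tensor) -> torch.Tensor:
    # A meta kernel allocates no real data; it only reports the output's
    # shape and dtype so the compiler/tracer can propagate metadata.
    # Before the fix: dtype=torch.uint8 was hardcoded, which is wrong
    # whenever the quantized input tensor is int8.
    # After the fix: inherit whatever quantized dtype the input carries.
    return input.new_empty(input.size(), dtype=input.dtype)
```

With this version, tracing an `int8` input yields an `int8` output descriptor, matching what the real kernel produces.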
