
Commit aaba33d

shoumikhin authored and facebook-github-bot committed
Fix op schema and avoid re-registering.
Summary: Looks like in the op schema declaration the special `*` arg should separate in- and out-args, so rearranging them a bit. Additionally, skip linking the runner with `examples/models/llama/ops` in favor of `kernels/quantized`.

bypass-github-export-checks

Reviewed By: mikekgfb

Differential Revision: D54523778

fbshipit-source-id: ed6750c75369f90d4a1469d18f2a9554d93f806a
1 parent 49fb74b commit aaba33d
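The `*` semantics behind this fix mirror Python's own keyword-only syntax: everything declared after a bare `*` must be passed by keyword. A minimal sketch of the idea (a toy function, not the real kernel, which takes the full argument list shown in the diffs below) illustrates why `dtype` moves in front of the `*` while the mutable `out` argument stays behind it:

```python
# Hypothetical Python analogue of the schema change (not the actual
# ExecuTorch kernel): in PyTorch-style op schemas, every argument after
# the bare `*` is keyword-only. The fix moves `dtype` in front of `*`,
# so only the mutable out-argument remains keyword-only.
def embedding_byte_dtype_out(weight, indices, dtype=None, *, out):
    """Toy stand-in: gather rows of `weight` into the out-list."""
    # `dtype` may be passed positionally or by keyword;
    # `out` must be passed by keyword because it follows `*`.
    out[:] = [weight[i] for i in indices]
    return out

rows = [[1, 2], [3, 4], [5, 6]]
buf = []
embedding_byte_dtype_out(rows, [2, 0], out=buf)  # buf -> [[5, 6], [1, 2]]
```

Calling the toy function with `out` as a positional argument raises a `TypeError`, which is the behavior the corrected schema declares for the out-variant op.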

File tree

2 files changed (+2, −2 lines)


examples/models/llama2/ops/quantized.yaml

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@
 - arg_meta: null
   kernel_name: torch::executor::quantized_embedding_byte_out

-- func: llama_quantized::embedding_byte.dtype_out(Tensor weight, Tensor weight_scales, Tensor? weight_zero_points, int weight_quant_min, int weight_quant_max, Tensor indices, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!)
+- func: llama_quantized::embedding_byte.dtype_out(Tensor weight, Tensor weight_scales, Tensor? weight_zero_points, int weight_quant_min, int weight_quant_max, Tensor indices, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!)
   variants: function
   kernels:
   - arg_meta: null

kernels/quantized/quantized.yaml

Lines changed: 1 addition & 1 deletion
@@ -40,7 +40,7 @@
 - arg_meta: null
   kernel_name: torch::executor::quantized_embedding_byte_out

-- func: quantized_decomposed::embedding_byte.dtype_out(Tensor weight, Tensor weight_scales, Tensor? weight_zero_points, int weight_quant_min, int weight_quant_max, Tensor indices, *, ScalarType? dtype=None, Tensor(a!) out) -> Tensor(a!)
+- func: quantized_decomposed::embedding_byte.dtype_out(Tensor weight, Tensor weight_scales, Tensor? weight_zero_points, int weight_quant_min, int weight_quant_max, Tensor indices, ScalarType? dtype=None, *, Tensor(a!) out) -> Tensor(a!)
   variants: function
   kernels:
   - arg_meta: null
