
Commit f678a04

Jack-Khuu authored and facebook-github-bot committed
Update yaml syntax to use kernel instead of dispatch
Summary:
X-link: pytorch/pytorch#104070

Based on this [code search](https://fburl.com/code/gjcnw8ly) (`*.yaml` with `dispatch: CPU:`), update all files found to use

```
kernels:
  - arg_meta: null
    kernel_name:
```

instead of

```
dispatch:
  CPU:
```

---

## Code changes:
- `fbcode/executorch/codegen/tools/gen_oplist.py`
  - Strip ET-specific fields prior to calling `parse_native_yaml_struct`

---

## Files edited that are not `*functions.yaml` or `custom_ops.yaml`
- fbcode/executorch/kernels/optimized/optimized.yaml
- fbcode/executorch/kernels/quantized/quantized.yaml
- fbcode/executorch/kernels/test/custom_kernel_example/my_functions.yaml

---

## Files found that were not edited
**Dispatched to more than just CPU**
- fbcode/caffe2/aten/src/ATen/native/native_functions.yaml
- xplat/caffe2/aten/src/ATen/native/native_functions.yaml
- xros/third-party/caffe2/caffe2/aten/src/ATen/native/native_functions.yaml

**Grouped ops.yaml path**
- fbcode/on_device_ai/Assistant/Jarvis/min_runtime/operators/ops.yaml

---

**Design Doc:** https://docs.google.com/document/d/1gq4Wz2R6verKJ2EFseLyPdAF0wqomnCrVDDJpRkYsRw/edit?kh_source=GDOCS#heading=h.8raqyft9y50

ghstack-source-id: 193204305
Reviewed By: larryliu0820
Differential Revision: D46952067
fbshipit-source-id: 79528bb751faa489fbde99a01c1950281e94045d
1 parent afffe5a commit f678a04

6 files changed: +443 −292 lines changed

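The migration itself is mechanical: each `dispatch:` block with a `CPU:` entry becomes a one-element `kernels:` list whose item carries `arg_meta: null` and a `kernel_name`. As a rough illustration of that rewrite (not the tooling used for this commit), here is a minimal PyYAML-based sketch; the file path and helper name are hypothetical.

```python
# Illustrative only: rewrite `dispatch: {CPU: ...}` entries into the new
# `kernels:` schema. The file path and helper name are hypothetical.
import yaml


def dispatch_to_kernels(entry: dict) -> dict:
    """Convert one op entry from the old dispatch form to the kernels form."""
    dispatch = entry.pop("dispatch", None)
    if dispatch and "CPU" in dispatch:
        entry["kernels"] = [{"arg_meta": None, "kernel_name": dispatch["CPU"]}]
    return entry


with open("kernels/optimized/optimized.yaml") as f:
    entries = yaml.safe_load(f) or []

migrated = [dispatch_to_kernels(e) for e in entries]
print(yaml.dump(migrated, default_flow_style=False, sort_keys=False))
```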

codegen/tools/gen_oplist.py

Lines changed: 2 additions & 0 deletions
```diff
@@ -5,6 +5,7 @@
 from typing import Any, Dict, List, Optional, Set
 
 import yaml
+from torchgen.executorch.parse import strip_et_fields
 
 from torchgen.gen import LineLoader, parse_native_yaml_struct
 from torchgen.selective_build.operator import SelectiveBuildOperator
@@ -139,6 +140,7 @@ def _get_et_kernel_metadata_from_ops_yaml(ops_yaml_path: str) -> Dict[str, List[
             ops.append(("aten::" if "::" not in e.get("op") else "") + e.get("op"))
         else:
             func_entries.append(e)
+    strip_et_fields(es)
     parsed_yaml = parse_native_yaml_struct(
         func_entries, set(), None, path=ops_yaml_path, skip_native_fns_gen=True
     )
```
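For context, `strip_et_fields` removes the ExecuTorch-specific keys from the parsed entries so that torchgen's `parse_native_yaml_struct` only sees fields it understands. Below is a simplified sketch of that idea; the real implementation lives in `torchgen.executorch.parse`, and the field list used here (just `kernels`) is an assumption for illustration.

```python
# Simplified illustration of a strip_et_fields-style pass, run before
# parse_native_yaml_struct: drop keys torchgen's parser does not know about.
# Not the torchgen implementation; the field list is an assumption.
from typing import Any, Dict, List

ET_ONLY_FIELDS = ("kernels",)  # assumed ExecuTorch-specific keys


def strip_et_fields_sketch(es: List[Dict[str, Any]]) -> None:
    """Remove ExecuTorch-only keys in place so torchgen sees plain entries."""
    for entry in es:
        for field in ET_ONLY_FIELDS:
            entry.pop(field, None)
```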

kernels/optimized/optimized.yaml

Lines changed: 24 additions & 16 deletions
```diff
@@ -3,33 +3,41 @@
 # This yaml file contains operators that have optimized kernels available.
 
 - op: _log_softmax.out
-  dispatch:
-    CPU: torch::executor::opt_log_softmax_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_log_softmax_out
 
 - op: bmm.out
-  dispatch:
-    CPU: torch::executor::opt_bmm_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_bmm_out
 
 - op: exp.out
-  dispatch:
-    CPU: torch::executor::opt_exp_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_exp_out
 
 - op: gelu.out
-  dispatch:
-    CPU: torch::executor::opt_gelu_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_gelu_out
 
 - op: le.Scalar_out
-  dispatch:
-    CPU: torch::executor::opt_le_scalar_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_le_scalar_out
 
 - op: le.Tensor_out
-  dispatch:
-    CPU: torch::executor::opt_le_tensor_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_le_tensor_out
 
 - op: native_layer_norm.out
-  dispatch:
-    CPU: torch::executor::opt_native_layer_norm_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_native_layer_norm_out
 
 - op: neg.out
-  dispatch:
-    CPU: torch::executor::opt_neg_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::opt_neg_out
```
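A quick way to confirm a file has been fully migrated is to load it and check that no entry still carries a `dispatch:` block and that every `kernels` item names a kernel. The check below is a hedged, illustrative sketch (not part of the commit), and the path passed in is just an example.

```python
# Illustrative sanity check for the new schema: flag leftover `dispatch:`
# blocks and kernels entries without a kernel_name. Not part of the commit.
import sys

import yaml


def check_kernel_schema(path: str) -> bool:
    with open(path) as f:
        entries = yaml.safe_load(f) or []
    ok = True
    for entry in entries:
        name = entry.get("op") or entry.get("func")
        if "dispatch" in entry:
            print(f"{name}: still uses the old dispatch syntax")
            ok = False
        for kernel in entry.get("kernels", []):
            if not kernel.get("kernel_name"):
                print(f"{name}: kernels entry is missing kernel_name")
                ok = False
    return ok


if __name__ == "__main__":
    sys.exit(0 if check_kernel_schema("kernels/optimized/optimized.yaml") else 1)
```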

kernels/portable/custom_ops.yaml

Lines changed: 9 additions & 6 deletions
```diff
@@ -16,8 +16,9 @@
 # TODO(T126667800) Remove dummy_param once custom namespaces are supported in the
 # portable op library.
 - func: allclose.out(Tensor self, Tensor other, float rtol=1e-05, float atol=1e-08, bool equal_nan=False, bool dummy_param=False, *, Tensor(a!) out) -> Tensor(a!)
-  dispatch:
-    CPU: torch::executor::allclose_out
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::allclose_out
 
 # The argument dummy_param is used solely to disambiguate this op from the native
 # allclose(). Otherwise, code calling this op is identical to the native op:
@@ -26,9 +27,11 @@
 # TODO(T126667800) Remove dummy_param once custom namespaces are supported in the
 # portable op library.
 - func: allclose.Tensor(Tensor self, Tensor other, float rtol=1e-05, float atol=1e-08, bool equal_nan=False, bool dummy_param=False) -> Tensor
-  dispatch:
-    CPU: torch::executor::allclose_tensor
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::allclose_tensor
 
 - func: linear.scratch_example(Tensor input, Tensor weight, Tensor? bias=None, *, Tensor(a!) out, Tensor(b!) _scratch_tensor) -> Tensor(a!)
-  dispatch:
-    CPU: torch::executor::linear_scratch_example
+  kernels:
+    - arg_meta: null
+      kernel_name: torch::executor::linear_scratch_example
```
