Automatically generate operator tests #2754
CI status (Dr. CI): ✅ No failures as of commit 9ad7c53 with merge base 764c353. Artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2754.
This pull request was exported from Phabricator. Differential Revision: D55446638
This pull request has been merged in d4b3e5c.
Summary:

Pull Request resolved: pytorch#2754

## Context

One of the most time-consuming parts of adding new operators is writing tests to verify that the implementation is correct. This changeset introduces a codegen solution for automatically generating those tests. The goal is a simple interface for specifying the inputs an operator should be checked with, plus a one-button solution for generating the code and executing the operator tests.

## Usage Overview

From the developer's perspective, the only file to interact with is `op_tests/cases.py`. The file is very simple:

```
# Prime numbers dim sizes for testing
XL = 113
L = 89
M2 = 41
M1 = 37
M = 29
S2 = 11
S1 = 7
S = 5
XS = 3

...

def get_mm_inputs():
    return [
        ((M1, L), (L, M2)),
        ((S1, S2), (S2, M)),
    ]

test_cases = {
    "aten.add.Tensor": get_binary_elementwise_inputs(),
    "aten.sub.Tensor": get_binary_elementwise_inputs(),
    "aten.div.Tensor": get_binary_elementwise_inputs(),
    "aten.mul.Tensor": get_binary_elementwise_inputs(),
    "aten.mm.default": get_mm_inputs(),
}
```

It contains just a mapping from the name an operator is registered under in the operator registry to a list of inputs for which tests should be generated.

To generate and run tests:

```
buck run //xplat/executorch/backends/vulkan/test/op_tests:compute_graph_op_tests_bin
```

## Design Overview

The code generation is mostly built on top of [torchgen](https://github.com/pytorch/pytorch/tree/main/torchgen), which is PyTorch's codegen system for parsing [native_functions.yaml](https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/native_functions.yaml) and generating C++ ATen functions from it. The basic idea is:

1. Using the operator registry name, find the corresponding native function in `native_functions.yaml`.
2. Use the function schema from the parsed native function to generate test fixtures that can build a Vulkan compute graph for the operator.
3. Generate individual test cases by creating ATen tensors and calling the ATen operator to get a reference output, then using the test fixture to get a Vulkan output and compare it to the reference output.
4. Use GTest [test parameterization](https://github.com/google/googletest/blob/main/googletest/samples/sample8_unittest.cc) to run each test case under a combination of dtypes, storage types, and memory layouts.

[Example generated cpp](https://www.internalfb.com/phabricator/paste/view/P1202279441)

Reviewed By: copyrightly

Differential Revision: D55446638

fbshipit-source-id: 93ca8e7cd43cee1e2678c489d6f2227507ef256f
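As a hedged sketch of what adding cases for a new operator might look like, following the conventions above (the `aten.abs.default` key and `get_unary_inputs` helper here are hypothetical, not part of the PR):

```python
# Hypothetical sketch of extending op_tests/cases.py with a new operator.
# Dim-size constants mirror the prime-number convention from the PR.
M = 29
S = 5

def get_unary_inputs():
    # One input-shape tuple per test case (illustrative helper).
    return [((M, M),), ((S, M),)]

test_cases = {
    # Key: the name the operator is registered under in the operator registry.
    # Value: list of input tuples to generate tests for.
    "aten.abs.default": get_unary_inputs(),
}
```

Each new entry only requires an input generator and a dictionary key; the codegen handles everything else.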
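The registry key encodes the ATen operator name and overload, which is what lets the generator find the matching schema in `native_functions.yaml`. A minimal sketch of that key parsing (the helper below is illustrative, not the PR's actual code):

```python
def parse_registry_name(registry_name: str):
    """Split a registry key like 'aten.add.Tensor' into the pieces
    needed to look up a native function: namespace, operator name,
    and overload name ('default' denotes the base overload)."""
    namespace, op_name, overload = registry_name.split(".")
    # native_functions.yaml spells the base overload with no suffix,
    # and named overloads as e.g. 'add.Tensor'.
    yaml_name = op_name if overload == "default" else f"{op_name}.{overload}"
    return namespace, op_name, overload, yaml_name

# e.g. parse_registry_name("aten.mm.default") -> ("aten", "mm", "default", "mm")
```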
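Conceptually, the GTest parameterization in step 4 multiplies every input case by every combination of dtype, storage type, and memory layout. A pure-Python sketch of that cross-product (the specific axis values below are illustrative, not the exact enums the generated C++ uses):

```python
from itertools import product

# Illustrative parameter axes; the generated GTest code sweeps the
# equivalent Vulkan-specific dtype/storage/layout enums.
DTYPES = ["float32", "float16"]
STORAGE_TYPES = ["buffer", "texture"]
MEMORY_LAYOUTS = ["width_packed", "channels_packed"]

def expand_test_cases(inputs):
    """Pair every input case with every (dtype, storage, layout) combo,
    mirroring how test parameterization multiplies test cases."""
    return [
        (case, dtype, storage, layout)
        for case, dtype, storage, layout in product(
            inputs, DTYPES, STORAGE_TYPES, MEMORY_LAYOUTS
        )
    ]

cases = expand_test_cases([((37, 89), (89, 41))])
# 1 input case x 2 dtypes x 2 storage types x 2 layouts = 8 tests
```

This is why a single entry in `cases.py` can fan out into many generated tests.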