# [ET-VK][Ops] aten.convolution (Transpose) #2883
## Conversation
## Summary (cases handled)

We introduce support for the convolution cases covered by ATen-VK's transpose implementation. This is achieved by

- reusing the existing [`conv_transpose2d.glsl`](https://github.com/pytorch/pytorch/blob/09c72eaa3f69f90402c86a30abf4fc621298578c/aten/src/ATen/native/vulkan/glsl/conv_transpose2d.glsl), and
- [moving special weights prepacking from CPU](https://github.com/pytorch/pytorch/blob/09c72eaa3f69f90402c86a30abf4fc621298578c/aten/src/ATen/native/vulkan/ops/Convolution.cpp#L134-L235) to the GPU in `conv_transpose2d_prepack_weights.glsl`.

We also include resizing support for dynamic shapes. Note that only the height and width of the input can vary.

## Cases not handled

The implementation is on par with ATen-VK's transpose implementation, which means the following cases are missing:

1. **Groups G > 1.**
2. **Batch (input) N > 1.**
3. **Dilation > 1.**

Differential Revision: [D55667336](https://our.internmc.facebook.com/intern/diff/D55667336/)

[ghstack-poisoned]
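As a rough, hypothetical illustration of why transposed-convolution weights need rearranging at all (the shader's actual packing for Vulkan texture layout is more involved than this), here is the classic channel-swap-and-flip that turns transposed-convolution weights into regular-convolution weights. The `numpy` shapes below are assumptions for the sketch, not taken from the PR:

```python
import numpy as np

# Hypothetical illustration only -- not the actual shader logic. A transposed
# convolution can be rewritten as an ordinary convolution once the weights are
# rearranged: swap the input/output channel axes and flip each kernel
# spatially. conv_transpose2d_prepack_weights.glsl performs this kind of
# layout transformation on the GPU instead of the CPU.
w = np.arange(8 * 4 * 3 * 3, dtype=np.float32).reshape(8, 4, 3, 3)  # (C_in, C_out, kH, kW)

# Swap the channel axes, then flip the kernel along both spatial dimensions.
w_prepacked = np.ascontiguousarray(w.transpose(1, 0, 2, 3)[:, :, ::-1, ::-1])
print(w_prepacked.shape)  # (4, 8, 3, 3)
```

Note that the `(C_in, C_out, kH, kW)` input layout here mirrors PyTorch's transposed-convolution weight convention with groups = 1, which is the only groups value this PR handles.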
See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2883. Note: links to docs will display an error until the docs builds have been completed.

✅ No failures as of commit 37e6160 with merge base d3326a2. This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D55667336
Pull Request resolved: #2883. ghstack-source-id: 221721754. Differential Revision: [D55667336](https://our.internmc.facebook.com/intern/diff/D55667336/)
This pull request has been merged in 8a6427e.
Stack from ghstack (oldest at bottom):
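Regarding the dynamic-shape resizing mentioned in the summary: as the input's height and width vary, the output's spatial size must be recomputed. A minimal sketch of the standard transposed-convolution output-size formula (the helper name and defaults are illustrative, not from the PR):

```python
def conv_transpose2d_out_size(in_size, kernel, stride=1, padding=0,
                              output_padding=0, dilation=1):
    # Standard transposed-convolution output size (per spatial dimension):
    #   out = (in - 1) * stride - 2 * padding
    #         + dilation * (kernel - 1) + output_padding + 1
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel - 1) + output_padding + 1)

# With the cases this PR handles, dilation stays at its default of 1.
print(conv_transpose2d_out_size(16, kernel=3, stride=2, padding=1, output_padding=1))  # 32
```

With stride = 2, padding = 1, output_padding = 1, and a 3x3 kernel, the output is exactly twice the input size per dimension, which is the common upsampling configuration.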