[ET-VK][Test] aten.max_pool2d_with_indices #2548
Due to the issues below, we only check equality of the output tensor, not the index tensor.

1. We can't verify index tensors, since VK-float16 vs. CPU-float32 deltas can change which index in a pool is the maximum. That can yield completely different integers in the index tensor.
2. To actually visualize the index tensor, we need to reconstruct the int32 values from the int64 values. The `torch.int64` index tensor is serialized as `int32` in Vulkan: Python expects int64, but C++ writes to the buffer as though it were int32. Hence, we must apply some computation to reconstruct the tensor; see below for details. A helper function for this was included in an earlier version of this change, but was removed for conciseness since we aren't checking the index tensor anyway.

For example, if the first and second elements are 16 and 17, we get this value as the first element:

```
73014444048 = 1000100000000000000000000000000010000
```

We must split this int64 into two int32 values and construct a new tensor accordingly:

```
10001 | 00000000000000000000000000010000
10001 |                            10000
   17 |                               16
```

Differential Revision: [D54962492](https://our.internmc.facebook.com/intern/diff/D54962492/)
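The reconstruction described above amounts to reinterpreting the int64 buffer's bytes as int32 pairs. A minimal sketch of such a helper, using NumPy here for illustration (the removed helper in the actual change may have looked different):

```python
import numpy as np

def split_int64_to_int32(t):
    """Reinterpret an int64 array's raw bytes as int32 values.

    Each int64 element packs two int32 indices; on a little-endian
    machine the low word (the earlier index) comes first.
    """
    return np.asarray(t, dtype=np.int64).view(np.int32)

# The worked example from the description: 73014444048 = (17 << 32) | 16
packed = np.array([73014444048], dtype=np.int64)
print(split_int64_to_int32(packed))  # [16 17]
```

The same byte reinterpretation is available in PyTorch via `Tensor.view(torch.int32)`, which avoids any copy.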
✅ No failures as of commit bb26fea with merge base 80e3989.
This pull request was exported from Phabricator. Differential Revision: D54962492
This pull request has been merged in a41ac1c.