[ET-VK][Test] aten.max_pool2d_with_indices #2548


Closed
jorgep31415 wants to merge 3 commits

Conversation

@jorgep31415 (Contributor) commented Mar 20, 2024

Stack from ghstack (oldest at bottom):


Due to the issues below, we only check equality of the output tensor, not the index tensor.

1. We can't verify the index tensor because VK float16 vs. CPU float32 precision deltas can change which index within a pool is the maximum, which can yield completely different integers in the index tensor.
2. To actually visualize the index tensor, we need to reconstruct the int32 values from the int64 values. The `torch.int64` index tensor is serialized as `int32` in Vulkan, so Python expects int64 while C++ writes to the buffer as though it holds int32. Hence, we must apply some computation to reconstruct the tensor; see the example and the sketch below. A helper function was included in an earlier version of this change, but was removed for conciseness since we aren't checking the index tensor anyway.

For example, if the first and second index elements are 16 and 17, Python reads this value as the first int64 element:
```
73014444048 = 1000100000000000000000000000000010000
```
We must split this int64 into two int32 values, and construct a new tensor accordingly.
```
10001 | 00000000000000000000000000010000
10001 | 10000
17 | 16
```
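
For reference, here is a minimal sketch of that reconstruction, assuming a little-endian host so each int64 element packs two consecutive int32 indices with the first index in the low 32 bits. `unpack_int32_indices` is a hypothetical name for illustration, not the helper that was removed from this change:
```python
import torch

def unpack_int32_indices(packed: torch.Tensor) -> torch.Tensor:
    # packed: index tensor read back as int64, where each element actually
    # holds two consecutive int32 indices (assumes little-endian layout).
    assert packed.dtype == torch.int64
    low = packed & 0xFFFFFFFF            # first index of each pair, e.g. 16
    high = (packed >> 32) & 0xFFFFFFFF   # second index of each pair, e.g. 17
    # Interleave the low/high words back into one flat int32 index tensor.
    return torch.stack((low, high), dim=-1).flatten().to(torch.int32)

packed = torch.tensor([73014444048])     # (17 << 32) | 16
print(unpack_int32_indices(packed))      # tensor([16, 17], dtype=torch.int32)
```
Under the same little-endian assumption, reinterpreting the storage with `packed.view(torch.int32)` yields the same flat sequence of indices.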

Differential Revision: [D54962492](https://our.internmc.facebook.com/intern/diff/D54962492/)


pytorch-bot bot commented Mar 20, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2548

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit bb26fea with merge base 80e3989:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Mar 20, 2024
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D54962492

jorgep31415 added a commit that referenced this pull request Mar 20, 2024
ghstack-source-id: 219490380
Pull Request resolved: #2548
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D54962492

jorgep31415 added a commit that referenced this pull request Mar 20, 2024
Pull Request resolved: #2548

ghstack-source-id: 219491401
@exported-using-ghexport

Differential Revision: [D54962492](https://our.internmc.facebook.com/intern/diff/D54962492/)
@facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D54962492

jorgep31415 added a commit that referenced this pull request Mar 21, 2024
Pull Request resolved: #2548

ghstack-source-id: 219506266
@exported-using-ghexport

Differential Revision: [D54962492](https://our.internmc.facebook.com/intern/diff/D54962492/)
@facebook-github-bot (Contributor)

This pull request has been merged in a41ac1c.

Labels: CLA Signed, fb-exported, Merged