Enable zero-size tensors #3640

Closed · wants to merge 1 commit

Conversation

@SS-JIA SS-JIA commented May 16, 2024

Summary:
As title.

The approach is slightly different from the one used in PyTorch Vulkan. Instead of binding no memory for zero-size tensors, we make a small allocation. The reason for this change is to account for the possibility that a zero-size tensor is used as an input while the output is not zero-size; in that case we still need to be able to bind the zero-size tensor to a shader.

Differential Revision: D57450473


pytorch-bot bot commented May 16, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3640

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit b49c28d with merge base 46ec26b:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 16, 2024
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D57450473

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 16, 2024
@SS-JIA SS-JIA force-pushed the export-D57450473 branch from 3a8a8f0 to 2abf37b Compare May 16, 2024 18:37

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 16, 2024
Reviewed By: yipjustin

Differential Revision: D57450473
@SS-JIA SS-JIA force-pushed the export-D57450473 branch from 2abf37b to c25d11d Compare May 16, 2024 19:46

@SS-JIA SS-JIA force-pushed the export-D57450473 branch from c25d11d to b49c28d Compare May 16, 2024 19:47

@facebook-github-bot

This pull request has been merged in 5c70121.

copyrightly added a commit to copyrightly/executorch that referenced this pull request May 23, 2024
Summary:

`batch_norm` was implemented in [PR 3569](pytorch#3569) but not registered, because 0-size tensors were not supported at the time. Since 0-size tensors are supported as of [PR 3640](pytorch#3640), we can register this op now.

Differential Revision: D57707822
facebook-github-bot pushed a commit that referenced this pull request May 23, 2024
Summary:
Pull Request resolved: #3716

`batch_norm` was implemented in [PR 3569](#3569) but not registered, because 0-size tensors were not supported at the time. Since 0-size tensors are supported as of [PR 3640](#3640), we can register this op now.

Reviewed By: jorgep31415

Differential Revision: D57707822

fbshipit-source-id: ec293adc29a4d16ea56d6cd7bba7a3b7fa4c7d6e
kirklandsign pushed a commit to kirklandsign/executorch that referenced this pull request May 24, 2024
Labels
CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported Merged
3 participants