aten.full.default #3013


Closed · wants to merge 1 commit

Conversation

copyrightly (Contributor)

Summary:
We implement `aten.full.default`, which has the following signature (from native_functions.yaml):
https://www.internalfb.com/code/fbsource/[8db4b5872791bb88a62ecaa60b667ee4c1b189bf]/fbcode/caffe2/aten/src/ATen/native/native_functions.yaml?lines=2801

To bypass graph build errors, we simply create a null value for the following arg types:

  • torch.device
  • torch.dtype
  • torch.layout

since they don't have any effect on our operator implementation on Vulkan. (Note that torch.layout is a totally different concept from GPUMemoryLayout on Vulkan.)

Differential Revision: D56049674


pytorch-bot bot commented Apr 12, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3013

Note: Links to docs will display an error until the docs builds have been completed.

❌ 2 New Failures

As of commit ac462c2 with merge base 74576e8:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot added the CLA Signed label on Apr 12, 2024
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D56049674

Summary:

We implement [`aten.full.default`](https://pytorch.org/docs/stable/generated/torch.full.html), which has the following signature:
```
func: full(SymInt[] size, Scalar fill_value, *, ScalarType? dtype=None, Layout? layout=None, Device? device=None, bool? pin_memory=None) -> Tensor
```
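
For reference, the eager-mode semantics that the Vulkan implementation needs to match can be exercised through the public `torch.full` API; the snippet below is illustrative only and does not go through the Vulkan backend.

```
import torch

# torch.full materializes a tensor of the given size, filled with a scalar.
x = torch.full((2, 3), 7.0)                 # 2x3 float tensor filled with 7.0
y = torch.full((4,), 0, dtype=torch.int64)  # dtype pinned explicitly

print(x)
# tensor([[7., 7., 7.],
#         [7., 7., 7.]])
print(y)
# tensor([0, 0, 0, 0])
```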

To bypass graph build errors, we simply create a null value for the following arg types:
- torch.device
- torch.dtype
- torch.layout

since they don't have any effect on our operator implementation on Vulkan; a rough sketch of this bypass follows below. (Note that [`torch.layout`](https://pytorch.org/docs/stable/tensor_attributes.html#torch.layout) is a totally different concept from `GPUMemoryLayout` on Vulkan.)
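
As a rough illustration of the bypass described above, assuming a hypothetical graph-builder interface (`add_null_value` and `add_value` are made-up names, not the actual ExecuTorch Vulkan API):

```
import torch

# Arg types that carry no meaning for the Vulkan implementation of full().
NULL_ARG_TYPES = (torch.device, torch.dtype, torch.layout)

def add_arg_to_graph(builder, arg):
    # Map device/dtype/layout args to a null value instead of failing the
    # graph build; every other arg is serialized into the graph as usual.
    if isinstance(arg, NULL_ARG_TYPES):
        return builder.add_null_value()  # hypothetical API
    return builder.add_value(arg)        # hypothetical API
```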

Reviewed By: jorgep31415

Differential Revision: D56049674

facebook-github-bot (Contributor)

This pull request has been merged in eb44e88.

Labels: CLA Signed, fb-exported, Merged