Add transfer shaders for buffer storage tensors #3684


Closed
wants to merge 1 commit into from

Conversation

SS-JIA
Contributor

@SS-JIA SS-JIA commented May 20, 2024

Summary:

Context

Add transfer shaders for tensors that use buffer storage, in preparation for quantization support.

Differential Revision: D57577019
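As an illustration (not code from this PR), a transfer shader for buffer-backed tensors essentially performs a strided copy: each compute invocation takes a linear index, unravels it into tensor coordinates, and re-linearizes those coordinates with the source and destination strides. A minimal Python sketch of that index arithmetic, with all names hypothetical and a loop standing in for the GPU dispatch:

```python
import math

def copy_buffer_tensor(src, sizes, src_strides, dst_strides):
    """Copy a tensor stored in flat buffer `src` into a new flat buffer,
    remapping elements from the `src_strides` layout to `dst_strides`."""
    numel = math.prod(sizes)
    dst = [0] * numel
    for tid in range(numel):  # tid plays the role of gl_GlobalInvocationID.x
        # Unravel tid into tensor coordinates (row-major over `sizes`).
        rem, coords = tid, []
        for size in reversed(sizes):
            coords.append(rem % size)
            rem //= size
        coords.reverse()
        # Re-linearize the same coordinates with each buffer's strides.
        src_idx = sum(c * s for c, s in zip(coords, src_strides))
        dst_idx = sum(c * s for c, s in zip(coords, dst_strides))
        dst[dst_idx] = src[src_idx]
    return dst
```

For example, copying a 2x3 row-major tensor (strides `[3, 1]`) into a column-major layout (strides `[1, 2]`) reorders the flat data accordingly, while copying with identical strides is a plain memcpy.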


pytorch-bot bot commented May 20, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/3684

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit f4a3a6a with merge base ce751fc:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label May 20, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D57577019

SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 20, 2024
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from c951e5d to 3577404 Compare May 20, 2024 20:05
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from 3577404 to 9eedc1f Compare May 20, 2024 20:16
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 20, 2024
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 21, 2024
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from 9eedc1f to 8dc7fdf Compare May 21, 2024 14:28
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 21, 2024
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from 8dc7fdf to c670e6f Compare May 21, 2024 14:30
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 21, 2024
Summary:
Pull Request resolved: pytorch#3684

## Context

Add support for tensors that use buffer storage, in preparation for quantization support. For context, the initial versions of quantized operators will target buffer-based tensors, since the primary use case is LLMs, which may contain tensors that exceed texture size limits.

Differential Revision: D57577019
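To illustrate the motivation above (a hypothetical sketch, not code from this PR): texture-backed tensors are bounded by the device's maximum image extents, so a tensor whose texel extents exceed those limits must fall back to buffer storage. Assuming a device limit of 16384 texels per dimension:

```python
# Assumed device limit, e.g. Vulkan's maxImageDimension3D on many GPUs.
MAX_TEXTURE_EXTENT = 16384

def choose_storage(texel_extents):
    """Pick 'texture' when every extent fits within the device image limit,
    otherwise fall back to 'buffer'. `texel_extents` is a (W, H, D) tuple."""
    if all(extent <= MAX_TEXTURE_EXTENT for extent in texel_extents):
        return "texture"
    return "buffer"
```

A small activation tensor such as `(64, 64, 4)` fits in a texture, while a flattened LLM weight extent like `(32000, 1, 1)` exceeds the limit and requires buffer storage.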
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from c670e6f to 52ff39b Compare May 21, 2024 22:29
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from 52ff39b to 86f9077 Compare May 22, 2024 14:25
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 22, 2024
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 22, 2024
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from 86f9077 to a03f9dc Compare May 22, 2024 14:31
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 22, 2024
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from a03f9dc to 2380ff0 Compare May 22, 2024 14:36
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 22, 2024
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from 2380ff0 to a6bab95 Compare May 22, 2024 14:41
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from a6bab95 to b708283 Compare May 22, 2024 21:54
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 22, 2024
SS-JIA added a commit to SS-JIA/executorch-1 that referenced this pull request May 22, 2024
Reviewed By: yipjustin

Differential Revision: D57577019
@SS-JIA SS-JIA force-pushed the export-D57577019 branch from b708283 to d6a034f Compare May 22, 2024 23:21
@facebook-github-bot
Contributor

This pull request has been merged in 2d48cdc.

Labels: CLA Signed, fb-exported, Merged
3 participants