[ET-VK][ez] Fix linear weight int4 test due to change in ATen API #7751


Merged: 1 commit merged from gh/SS-JIA/170/orig into main on Jan 17, 2025

Conversation

pytorchbot (Collaborator) commented:

This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #7739
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/SS-JIA/170/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/170/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/main
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/SS-JIA/170/orig
@diff-train-skip-merge

Pull Request resolved: #7739

## Context

Recently, the ATen API for 4-bit quantized linear changed, so our test must adapt to the new API.

Concretely, the API changes were:

* The `_for_cpu` suffix was added to the operator name.
* The `_convert_weight_to_int4pack` operator now expects unpacked 4-bit weights (one 4-bit value per element) rather than a packed scheme in which two 4-bit values share a single 8-bit value. A sketch of adapting to both changes follows this list.
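As a minimal sketch of what such a test adaptation can look like, assuming the `_for_cpu` variants keep the original ops' signatures (the shapes, dtypes, and high-nibble-first packing order below are illustrative assumptions, not taken from this PR):

```python
import torch

n, k, group_size, inner_k_tiles = 32, 64, 32, 2

# Old layout: two 4-bit values packed per uint8 element, shape [n, k // 2].
packed = torch.randint(0, 256, (n, k // 2), dtype=torch.uint8)

# New layout: one 4-bit value (0..15) per element, shape [n, k].
# High-nibble-first ordering is an assumption.
unpacked = torch.stack([packed >> 4, packed & 0x0F], dim=-1).reshape(n, k)

# Old call (assumed): torch.ops.aten._convert_weight_to_int4pack(packed, inner_k_tiles)
# New call: `_for_cpu` suffix, unpacked weights (int32 dtype is an assumption).
weight_int4pack = torch.ops.aten._convert_weight_to_int4pack_for_cpu(
    unpacked.to(torch.int32), inner_k_tiles
)

# The linear op gains the same suffix; signature assumed to mirror
# torch.ops.aten._weight_int4pack_mm(x, w, group_size, scales_and_zeros).
x = torch.randn(1, k)
scales_and_zeros = torch.rand(k // group_size, n, 2)
out = torch.ops.aten._weight_int4pack_mm_for_cpu(
    x, weight_int4pack, group_size, scales_and_zeros
)
```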
ghstack-source-id: 261959346
@exported-using-ghexport

Differential Revision: [D68333687](https://our.internmc.facebook.com/intern/diff/D68333687/)

pytorch-bot bot commented Jan 17, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/7751

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label Jan 17, 2025
@SS-JIA SS-JIA self-requested a review January 17, 2025 23:58
@SS-JIA SS-JIA merged commit 108ec68 into main Jan 17, 2025
37 of 44 checks passed
@SS-JIA SS-JIA deleted the gh/SS-JIA/170/orig branch January 17, 2025 23:58
YIWENX14 pushed a commit that referenced this pull request Jan 28, 2025
zonglinpeng pushed a commit to zonglinpeng/executorch that referenced this pull request Jan 30, 2025
Labels: CLA Signed, topic: not user facing