Register quantized ops into quantization example #85

Closed
larryliu0820 wants to merge 1 commit

Conversation

larryliu0820 (Contributor)

Summary:
Enables `export_to_pte` for quantized models. Previously, quantized models couldn't run `export_to_pte` because some quantized ops were missing out variants. We recently added support for custom ops, which are registered into EXIR by loading a shared library.

This diff adds support for registering the out variants of quantized ops and adds a test for this to CI.

Differential Revision: D48541611
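For reference, a minimal sketch of the registration flow this diff relies on. The library path and name here are assumptions for illustration, not the exact example code; `torch.ops.load_library` is the standard PyTorch entry point for this:

```python
import torch

# Hypothetical path, for illustration: the AOT library that registers the
# out variants of the quantized ops (built separately, e.g. by buck2).
QUANTIZED_OPS_LIB = "cmake-out/libquantized_ops_aot_lib.so"

# Loading the shared library runs its registration code, which makes the
# out-variant overloads of the quantized ops visible to EXIR.
torch.ops.load_library(QUANTIZED_OPS_LIB)

# From here, a quantized model can go through export_to_pte, since every
# quantized op it contains now has an out variant to lower to.
```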

facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D48541611

jerryzh168 (Contributor) left a comment

Thanks! Does it work for per-channel weight quant as well? (changing `is_per_channel` to `True` instead of `False` in the quant example)
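For context, a sketch of that toggle, assuming the PT2E flow with XNNPACKQuantizer that the quantization example used around this time; the model and calibration input are placeholders:

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import convert_pt2e, prepare_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)

# Placeholder model and input; the real example uses its own model.
model = torch.nn.Linear(4, 4).eval()
example_inputs = (torch.randn(1, 4),)

quantizer = XNNPACKQuantizer()
# is_per_channel=True quantizes weights per output channel rather than
# with a single scale/zero point for the whole tensor.
quantizer.set_global(get_symmetric_quantization_config(is_per_channel=True))

exported = capture_pre_autograd_graph(model, example_inputs)
prepared = prepare_pt2e(exported, quantizer)
prepared(*example_inputs)  # calibrate
quantized = convert_pt2e(prepared)
```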


An excerpt from the new CI script, with a review comment below:

```sh
set -e

test_buck2_quantization() {
```

This is probably a TODO: add a cmake build here later for completeness.
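A hypothetical sketch of what that cmake counterpart could look like; the function name, cmake option, build target, and example entry point are all assumptions, not real build options:

```sh
test_cmake_quantization() {
  # Hypothetical cmake option and target names, for illustration only.
  (mkdir -p cmake-out && cd cmake-out \
    && cmake -DBUILD_QUANTIZED_OPS_AOT_LIB=ON .. \
    && make -j quantized_ops_aot_lib)
  # Mirror the buck2 path: load the freshly built shared library and run
  # the quantization example against it.
  python -m examples.quantization.example \
    --so_library cmake-out/libquantized_ops_aot_lib.so
}
```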

larryliu0820 (Contributor, Author)

> Thanks! Does it work for per-channel weight quant as well? (changing `is_per_channel` to `True` instead of `False` in the quant example)

This should work for all the existing quantized ops.

Summary:
Pull Request resolved: pytorch/executorch#85

Enables `export_to_pte` for quantized models. Previously, quantized models couldn't run `export_to_pte` because some quantized ops were missing out variants. We recently added support for custom ops, which are registered into EXIR by loading a shared library.

This diff adds support for registering the out variants of quantized ops and adds a test for this to CI.

Reviewed By: huydhn

Differential Revision: D48541611

fbshipit-source-id: 113b9af4bfc3b47673453e6f898d2205f361a879

facebook-github-bot (Contributor)

This pull request has been merged in 0ece196.
