Use symmetric weights for convs and int8 in the default quantizer #8344
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/8344
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures, 5 Unrelated Failures as of commit 4b02da3 with merge base ee7d388:
- NEW FAILURES: the following jobs have failed.
- FLAKY: the following job failed, but likely due to flakiness present on trunk.
- BROKEN TRUNK: the following jobs failed, but were already failing on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D69405797
Force-pushed from 78b7bf9 to bde2dd7.
Force-pushed from bde2dd7 to 4b02da3.
Summary:
As titled: switch the default quantizer from uint8 to int8. int8 should give better performance with Cadence kernels, since the uint8 kernels are no longer being improved.
The upcoming quantized convolution kernel also requires symmetric weights, so we make that change at the same time.
Differential Revision: D69405797
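For illustration, here is a minimal sketch of what these defaults could look like when expressed as PT2E `QuantizationSpec`s. The observer choice and exact quant ranges below are assumptions for the sketch, not values taken from this diff:

```python
import torch
from torch.ao.quantization.observer import MinMaxObserver
from torch.ao.quantization.quantizer import QuantizationSpec

# Activations: affine (asymmetric) int8, replacing the previous uint8 default.
# (Range and observer are illustrative assumptions.)
act_qspec = QuantizationSpec(
    dtype=torch.int8,
    quant_min=-128,
    quant_max=127,
    qscheme=torch.per_tensor_affine,
    is_dynamic=False,
    observer_or_fake_quant_ctr=MinMaxObserver,
)

# Convolution weights: symmetric int8, i.e. the zero point is pinned to 0,
# which is what the upcoming quantized convolution kernel requires.
weight_qspec = QuantizationSpec(
    dtype=torch.int8,
    quant_min=-127,
    quant_max=127,
    qscheme=torch.per_tensor_symmetric,
    is_dynamic=False,
    observer_or_fake_quant_ctr=MinMaxObserver,
)
```

The practical payoff of symmetric weights is that the weight zero point is always 0, so a kernel can drop the weight-side zero-point correction term from the convolution's integer accumulation.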