Fix quantization for input to reference model #2317
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2317
Note: Links to docs will display an error until the docs builds have been completed.
✅ You can merge normally! (3 Unrelated Failures)
As of commit f1b2bf8 with merge base f9cad4e:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk.
BROKEN TRUNK - The following job failed but was already failing on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Will fix the double decorators causing the failures.
backends/arm/test/ops/test_add.py (Outdated)

```python
        self._test_add_tosa_BI_pipeline(self.Add2(), test_data)

    @unittest.skipIf(
        not VELA_INSTALLED,
        "There is no point in running U55 tests if the Vela tool is not installed",
    )
    def test_add2_u55_BI(self):
        test_data = (torch.ones(1, 1, 4, 4), torch.ones(1, 1, 4, 1))
    @parameterized.expand(Add2.test_parameters)
```
The @parameterized and @unittest decorators together are causing the failures.
still an issue?
Force-pushed from eac5f44 to 63bd288
Force-pushed from c9ff559 to 8b51570
Add the zero point instead of subtracting it. This happened to work so far since the tests used all-ones inputs, which quantize to a zero point of -128; adding and subtracting -128 differ by exactly 256, so both give the same np.int8 result once the value wraps. Also round and clip the scaled values to the int8 range.

Signed-off-by: Per Åstrand <[email protected]>
Change-Id: Ideaed6d072a4065573b38fb7476c7dbe8ba814fd
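A minimal NumPy sketch of the fix described above, with illustrative scale and zero-point values (not taken from the PR): add the zero point after scaling, then round and clip into the int8 range before casting. The last two lines show why the earlier subtraction went unnoticed with all-ones inputs.

```python
import numpy as np

# Sketch of the corrected affine quantization; scale/zero_point are
# hypothetical illustration values, not the exact ExecuTorch parameters.
def quantize_int8(x, scale, zero_point):
    q = np.round(x / scale) + zero_point          # add the zero point, do not subtract it
    return np.clip(q, -128, 127).astype(np.int8)  # round and clip before the int8 cast

scale, zp = 1.0 / 255.0, -128               # plausible parameters for an all-ones input
scaled = np.round(1.0 / scale)              # 255.0
added, subbed = scaled + zp, scaled - zp    # 127.0 vs. 383.0: differ by exactly 256
# Both wrap to the same int8 value, which is what masked the sign bug:
print(np.array([added, subbed]).astype(np.int64).astype(np.int8))  # [127 127]
```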
Fix the order of the decorators so that unittest is applied first and parameterized then expands the input. Fix a bug in the add operator conversion to handle different scales correctly.

Signed-off-by: Per Åstrand <[email protected]>
Change-Id: Ic228cf0215e8171392776739936a53c025802fd5
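A self-contained sketch of the fixed ordering (the parameter list and the VELA_INSTALLED flag below are stand-ins, not the real test data): decorators apply bottom-up, so @unittest.skipIf wraps the test first and @parameterized.expand then generates one test per parameter set from the already skip-wrapped function. Per this PR, the reversed order was what caused the failures.

```python
import unittest

from parameterized import parameterized

VELA_INSTALLED = False  # stand-in for the real Vela availability check


class TestAdd(unittest.TestCase):
    # skipIf sits closest to the function, so it is applied first;
    # parameterized.expand then expands the skip-wrapped test.
    @parameterized.expand([("ones", 1.0), ("twos", 2.0)])  # hypothetical parameters
    @unittest.skipIf(not VELA_INSTALLED, "Vela tool is not installed")
    def test_add2_u55_BI(self, name, value):
        self.assertEqual(value + value, 2 * value)
```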
Force-pushed from 8b51570 to f1b2bf8
LGTM
@digantdesai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@digantdesai merged this pull request in d06ccd2.