Qualcomm AI Engine Direct - Add rewrite function of observer #10093
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/10093
Note: Links to docs will display an error until the docs builds have completed. ❌ 1 new failure as of commit 3caeaab with merge base 95a1db5. The following job has failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @cccclai, this PR adds the requested functionality to overwrite quantization parameters after calibration. It can handle shared observers too. Thank you!
@chunit-quic thanks for putting up the PR. cc: @sxu @billmguo
Mind sharing a bit more detail about the request? I probably missed it.
No problem. :)
After a brief discussion with your team, we concluded that the rewriting stage should occur after calibration but before conversion. That's essentially the background.
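The tricky part of rewriting observers after calibration is that several graph nodes may share one observer instance, so a by-name replacement has to preserve that sharing. The sketch below is a hypothetical, simplified illustration (a plain dict stands in for the prepared model's named submodules, and `Observer` is an invented stand-in class), not the actual ExecuTorch `rewrite_prepared_observer` implementation:

```python
class Observer:
    """Stand-in for a quantization observer module (hypothetical)."""
    def __init__(self, name):
        self.name = name

def rewrite_prepared_observer(modules, name_to_new_obs):
    # Map each old observer instance to its replacement by identity, so that
    # two submodule names sharing one old observer end up sharing the new one.
    old_to_new = {}
    for name, new_obs in name_to_new_obs.items():
        old_to_new[id(modules[name])] = new_obs
    # Replace every entry that points at a rewritten observer instance.
    for name, obs in list(modules.items()):
        if id(obs) in old_to_new:
            modules[name] = old_to_new[id(obs)]
    return modules

# Two activation post-process slots share one observer; rewriting either
# name swaps the observer for both, preserving the sharing relationship.
shared = Observer("old")
modules = {
    "activation_post_process_0": shared,
    "activation_post_process_1": shared,
    "activation_post_process_2": Observer("solo"),
}
new_obs = Observer("new")
rewrite_prepared_observer(modules, {"activation_post_process_0": new_obs})
assert modules["activation_post_process_1"] is new_obs  # sharing preserved
assert modules["activation_post_process_2"].name == "solo"  # untouched
```

In the real flow the rewrite would run between calibration and `convert_pt2e`, matching the ordering agreed on above.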
Looks like this would work to me. However, it would be great if someone from AO could confirm that overwriting the activation post-processing submodules this way is enough and that there are no other references to them.
Maybe @jerryzh168 can confirm?
@kirklandsign can you help check the CI signal for this one too? I remember there are some errors.
Hey, sorry for being late. Can you rebase this PR?
Force-pushed from 2c9c79c to b92c102 (compare)
Done! Thank you.
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Hey, there are some errors, likely due to the PyTorch pin update. Mind rebasing again? Sorry for the inconvenience.
No problem. Just rebased. Please feel free to let me know if there is any issue. Thank you!
    qscheme=torch.per_tensor_affine,
)

rewrite_prepared_observer(prepared, {"activation_post_process_2": new_obs})
@sxu does your callback work for this? Maybe you can share your example.
Did you rebase? It seems like there is no new commit.
Force-pushed from b92c102 to 943eb09 (compare)
Oops, sorry, I rebased locally without pushing. Thank you for pointing that out! Done.
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
There seems to be a lint error, and that might be the reason merging fails. Since #11049 has landed, can you apply the corresponding changes and see if the error is gone?
- Add function to rewrite observer after prepare_pt2e
- Add corresponding test case
Force-pushed from 943eb09 to 3caeaab (compare)
Fixed. Thanks a lot for pointing out the error!
@cccclai has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
I don't quite follow why we cannot do this by allowing module-specific customization. For example, can I not say