Fix ._transform naming #162
Conversation
This pull request was exported from Phabricator. Differential Revision: D48784931
Force-pushed: 215fa15 → 496fa18 → 14cc84b → dd6b4a9 → 8984295 → 6b6a419 → e6dbd49 → fb057d4 → 95a1d47 → 70a41b4
Summary: Pull Request resolved: pytorch/executorch#162
Reviewed By: manuelcandales
Differential Revision: D48784931
Force-pushed: 70a41b4 → a5673bd → 39d53d9
This pull request has been merged in ab258aa.
* MPS quantization
* mps dtypes
* updates
* fix names
* typo
* no bfloat16 for older macOS
* fix typo
* remove failing embedding quantization from MPS runs
* bfloat -> current model precision
* typo
* missed bfloat16, switch to default precision
* remove int8 quantization on mps
* enable cpu fallback for mps on int4
* hack int4pack_mm for torch.float
* typo
* disable int4 because fp16 int4pack_mm not working for float16
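The commit notes above mention that bfloat16 is unavailable on older macOS, so MPS runs fall back to the model's current default precision. A minimal sketch of that fallback decision, assuming illustrative names: the function, the dtype strings, and the macOS version cutoff are all hypothetical and not taken from the actual patch.

```python
def resolve_mps_dtype(requested: str, macos_major: int, default: str = "float16") -> str:
    """Pick a dtype usable on the MPS backend.

    `requested` and `default` are dtype names as strings; `macos_major`
    is the macOS major version. The cutoff of 14 is an assumption for
    illustration, not the real support boundary.
    """
    if requested == "bfloat16" and macos_major < 14:
        # Older macOS lacks bfloat16 on MPS: fall back to the model's
        # current (default) precision, as the commit notes describe.
        return default
    return requested


print(resolve_mps_dtype("bfloat16", 13))  # falls back to float16
print(resolve_mps_dtype("bfloat16", 14))  # bfloat16 kept
```

The same shape of check could gate the int4 path (falling back to CPU where `int4pack_mm` is unsupported), but the real dispatch logic lives inside the library.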
Reviewed By: manuelcandales
Differential Revision: D48784931