Update on "[ET-VK][int4] patch 4-bit source transformation quantizer to support linear modules with biases"
While LLaMA's linear modules do not have biases, some models do include biases in their linear modules.
Add support for biases in the source transform quantizer.
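As a rough sketch of the idea (not the actual ExecuTorch/Vulkan implementation, and all names below are illustrative): in a weight-only 4-bit scheme, only the weight matrix is quantized group-wise, so supporting a bias amounts to carrying the full-precision bias through the transformed module and adding it after the dequantized matmul. A minimal NumPy illustration:

```python
import numpy as np

def quantize_4bit_groupwise(w, group_size=8):
    """Symmetric 4-bit group-wise quantization of a weight matrix.

    Returns integer codes in [-8, 7] and per-group scales.
    (Illustrative helper, not the quantizer's real API.)
    """
    out_f, in_f = w.shape
    assert in_f % group_size == 0
    groups = w.reshape(out_f, in_f // group_size, group_size)
    scales = np.abs(groups).max(axis=-1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def quantized_linear(x, q, scales, bias=None):
    """Dequantize the 4-bit weights and apply a linear layer.

    The bias, if present, is never quantized: it is simply added in
    full precision after the matmul, which is all that "bias support"
    requires of the transformed module.
    """
    w = (q.astype(np.float32) * scales).reshape(q.shape[0], -1)
    y = x @ w.T
    if bias is not None:
        y = y + bias
    return y
```

The key point for this diff is the last step: the weight-quantization math is unchanged, and the transformed linear just forwards the original module's bias through.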
Differential Revision: [D69072087](https://our.internmc.facebook.com/intern/diff/D69072087/)
[ghstack-poisoned]