Commit b9e9abd

Applied quantization for linear with bias=True in pre_quantization
Differential Revision: D71573144
Pull Request resolved: #9472
1 parent: 1a9a59b

File tree

1 file changed (+1, -1)


examples/models/llama/source_transformation/pre_quantization.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -44,7 +44,7 @@ def replacement_fn(child: torch.nn.Module) -> torch.nn.Module:
         # pyre-fixme[6]: For 2nd argument expected `int` but got `Union[Module,
         # Tensor]`.
         child.out_features,
-        bias=False,
+        bias=child.bias is not None,
         device=child.weight.device,
         groupsize=group_size,
         precision=precision,
```
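For context, here is a minimal, self-contained sketch of the pattern this one-line fix restores: the module-replacement hook mirrors the child's bias flag instead of hard-coding `bias=False`, so `nn.Linear` layers created with `bias=True` keep their bias through the pre-quantization transform. `QuantizedLinear` is a hypothetical stand-in for the quantized linear class constructed in the real code (the hunk above shows only its arguments); only the parameters relevant to this change are modeled.

```python
import torch

# Hypothetical stand-in for the quantized linear class used in
# pre_quantization.py; only the constructor arguments relevant to
# this commit are modeled.
class QuantizedLinear(torch.nn.Module):
    def __init__(self, in_features: int, out_features: int, bias: bool = False):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features
        # A real implementation would hold quantized weights; here we
        # only keep a bias term when the source module had one.
        self.bias = torch.nn.Parameter(torch.zeros(out_features)) if bias else None


def replacement_fn(child: torch.nn.Module) -> torch.nn.Module:
    # Before this commit the replacement hard-coded bias=False, silently
    # dropping the bias of any nn.Linear(..., bias=True). Mirroring the
    # child's bias preserves it across the swap.
    return QuantizedLinear(
        child.in_features,
        child.out_features,
        bias=child.bias is not None,
    )


# Minimal check: a biased linear keeps its bias flag after replacement.
src = torch.nn.Linear(16, 32, bias=True)
dst = replacement_fn(src)
assert (dst.bias is not None) == (src.bias is not None)
```

Note that `child.bias is not None` is the idiomatic way to test for a bias on `torch.nn.Linear`, since the module stores `bias=None` when constructed with `bias=False`.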
