
Commit 23a57e3

bartowski1182 authored and teleprint-me committed
Remove .attention from skipped tensors to match more accurately (ggml-org#7051)
1 parent e4dfc15 commit 23a57e3

1 file changed: +1 -1 lines changed

convert-hf-to-gguf.py

Lines changed: 1 addition & 1 deletion
@@ -1442,7 +1442,7 @@ def write_tensors(self):
         experts = dict()
         for name, data_torch in self.get_tensors():
             # we don't need these
-            if name.endswith((".attention.masked_bias", ".attention.bias", ".attention.rotary_emb.inv_freq")):
+            if name.endswith((".attention.masked_bias", ".attention.bias", ".rotary_emb.inv_freq")):
                 continue
 
             old_dtype = data_torch.dtype
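The one-line change widens the skip filter: str.endswith accepts a tuple of suffixes, and dropping the .attention prefix lets the rotary-embedding frequency tensor be skipped regardless of how the attention block is named in the source checkpoint. A minimal sketch of the before/after behavior follows; the tensor names used here are illustrative examples, not taken from any particular model.

# Minimal sketch of the suffix match before and after this commit.
# The tensor names below are hypothetical, chosen only to show the difference.
OLD_SKIP = (".attention.masked_bias", ".attention.bias", ".attention.rotary_emb.inv_freq")
NEW_SKIP = (".attention.masked_bias", ".attention.bias", ".rotary_emb.inv_freq")

names = [
    "transformer.h.0.attention.rotary_emb.inv_freq",  # skipped by both tuples
    "model.layers.0.self_attn.rotary_emb.inv_freq",   # skipped only by the new tuple
]

for name in names:
    print(name, name.endswith(OLD_SKIP), name.endswith(NEW_SKIP))
# transformer.h.0.attention.rotary_emb.inv_freq True True
# model.layers.0.self_attn.rotary_emb.inv_freq False True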
