Commit ce9413d

export norms as f32

1 parent: 26e8f23

File tree

1 file changed: 1 addition, 1 deletion


convert-hf-to-gguf.py

Lines changed: 1 addition & 1 deletion
@@ -160,7 +160,7 @@ def write_tensors(self):
             data = data.astype(np.float32)

         # TODO: Why cant we use these float16 as-is? There should be not reason to store float16 as float32
-        if self.ftype == 1 and data_dtype == np.float16 and n_dims == 1:
+        if self.ftype == 1 and data_dtype == np.float16 and (n_dims == 1 or new_name.endswith("_norm.weight")):
             data = data.astype(np.float32)

         # if f16 desired, convert any float32 2-dim weight tensors to float16
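The effect of the change is that, in f16 export mode, normalization weights (tensors whose new name ends in `_norm.weight`) are now kept as float32 alongside 1-dimensional tensors, instead of being stored as float16. A minimal sketch of the resulting dtype policy, assuming a hypothetical `choose_dtype` helper (not part of the script) that mirrors the conditions shown in the diff:

```python
import numpy as np

def choose_dtype(data: np.ndarray, new_name: str, ftype: int) -> np.ndarray:
    """Hypothetical helper mirroring the rules in the diff above.

    ftype == 1 means 'f16 export'. In that mode:
      - f16 tensors that are 1-D, or that are *_norm.weight tensors,
        are widened back to f32 (this commit's change),
      - f32 2-D weight tensors are narrowed to f16.
    Everything else passes through unchanged.
    """
    n_dims = data.ndim
    if ftype == 1 and data.dtype == np.float16 and (
        n_dims == 1 or new_name.endswith("_norm.weight")
    ):
        return data.astype(np.float32)
    if ftype == 1 and data.dtype == np.float32 and n_dims == 2:
        return data.astype(np.float16)
    return data

# Usage: a 2-D norm weight that arrives as f16 now stays f32 in f16 mode,
# while an ordinary 2-D f32 weight is still converted down to f16.
norm = np.ones((1, 8), dtype=np.float16)
print(choose_dtype(norm, "blk.0.attn_norm.weight", ftype=1).dtype)  # float32
w = np.ones((4, 4), dtype=np.float32)
print(choose_dtype(w, "blk.0.attn_q.weight", ftype=1).dtype)        # float16
```

Before this commit, only the `n_dims == 1` branch existed, so a norm weight stored with an extra dimension would have slipped through as f16; the `new_name.endswith("_norm.weight")` clause catches those by name.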
