
gguf : fix strings to not be null-terminated #2839


Merged
merged 2 commits into master on Aug 27, 2023

Conversation

ggerganov
Member

close #2836

@ggerganov
Member Author

cc @philpax @slaren

@slaren
Member

slaren commented Aug 27, 2023

Maybe this one needs to be updated too?

https://github.com/ggerganov/llama.cpp/blob/34e5c9afe5816368ad75fdebacffc172cb863a7e/ggml.c#L20232
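
For context, the fix makes GGUF strings plain length-prefixed byte runs: a little-endian length field followed by exactly that many UTF-8 bytes, with no trailing NUL. A minimal Python sketch of reading one, assuming a GGUFv2 file (64-bit lengths) and a binary file object positioned at the string:

```python
import struct

def read_gguf_string(f) -> str:
    # GGUFv2 string: little-endian uint64 length, then exactly that many
    # UTF-8 bytes. After this fix there is no NUL terminator to skip.
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")
```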

@slaren
Member

slaren commented Aug 27, 2023

I am trying to compare an f16 model converted with convert.py against the same model "quantized" to f16. The result should be the same, right? But I am seeing differences in the headers. It looks like there is a lot more padding in the file created by quantize. Left is from quantize, right is from convert.py:
[screenshot: hex dump comparison of the two file headers]

I'll try re-converting the file; it may have been created with an outdated version of convert.py.

@ggerganov
Member Author

The left one is GGUFv2 and the right one is GGUFv1 - see the 5th byte
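
(The version field is a little-endian uint32 immediately after the 4-byte GGUF magic, so the 5th byte of the file is its low byte. A quick sketch to check it, assuming a local model path:)

```python
import struct

def gguf_version(path: str) -> int:
    # The header starts with the 4-byte magic b"GGUF" followed by a
    # little-endian uint32 version; its low byte is the "5th byte".
    with open(path, "rb") as f:
        assert f.read(4) == b"GGUF"
        (version,) = struct.unpack("<I", f.read(4))
    return version
```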

@slaren
Member

slaren commented Aug 27, 2023

Yes, after reconverting the file the headers look almost identical. Still, the files have a different hash. It's probably not an issue, but I'll see if I can figure out what exactly the differences are.
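
One way to pin down such a difference is to find the first byte where the two files diverge; a minimal sketch (the paths are placeholders):

```python
def first_diff(path_a: str, path_b: str):
    # Return the offset of the first differing byte, or None if the files
    # are byte-identical.
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))
```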

@ggerganov
Member Author

Yes, we should figure out where this difference comes from

@slaren
Member

slaren commented Aug 27, 2023

Ok, the only difference is the metadata key general.quantization_version. I commented it out in llama.cpp, and the results are now identical: exact same md5sum.
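
Since GGUF metadata keys are stored verbatim as UTF-8 byte strings in the header, a crude way to confirm which file carries the key is a raw substring scan (quick-and-dirty sketch; in principle it could also match bytes inside tensor data):

```python
def has_metadata_key(path: str, key: str = "general.quantization_version") -> bool:
    # Crude check: GGUF keys appear verbatim as UTF-8 bytes in the header.
    with open(path, "rb") as f:
        return key.encode("utf-8") in f.read()
```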

@ggerganov
Member Author

Looks like this change is backwards compatible, so no need for re-quantizing models in general, correct?

@slaren
Member

slaren commented Aug 27, 2023

In practice, I doubt this would cause any issues, but strings read by the Python library will have a trailing NUL character. It should be easy to modify the library to remove them; there is no reason to have strings with NULs anyway.
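
A sketch of that cleanup on the reading side, stripping any trailing NUL before decoding (a hypothetical helper, not part of the gguf package's API):

```python
def clean_gguf_string(raw: bytes) -> str:
    # Strings written by pre-fix builds may include a trailing NUL inside
    # the counted bytes; drop it before decoding.
    return raw.rstrip(b"\x00").decode("utf-8")
```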

ggerganov merged commit 103cfaf into master Aug 27, 2023
ggerganov deleted the fix-gguf-str branch August 27, 2023 18:50
@philpax

philpax commented Aug 27, 2023

All seems to work well, thanks!

akawrykow pushed a commit to akawrykow/llama.cpp that referenced this pull request Aug 29, 2023
* gguf : fix strings to not be null-terminated

ggml-ci

* gguf : fix gguf_add_tensor name
Successfully merging this pull request may close these issues.

Quantize adds \0 to the end of GGUF strings