vulkan: initial support for IQ1_S and IQ1_M quantizations #11528
Conversation
Works with coopmat2 enabled! Perf is a bit low, but I'll fix it after it's merged.
I added MMV kernels for the new quants. Some performance figures on Radeon 780M (~70 GB/s memory bandwidth): the LLVM/AMDGPU compiler does not like the generic code at all and behaves much better with the specialized shader. Before MMV kernels:
After:
See branch https://github.com/remyoudompheng/llama.cpp/tree/vulkan-iq-mmv for MMV kernels for IQ2 and IQ3 quants.
Very cool! I tested this and it's functionally correct, and perf is better on RTX 4070. I dug into the perf a bit and realized that a significant amount of time is spent in init_iq_shmem, since the LUT is so large. I think I had suggested this before, but a more unrollable loop structure helps:
Even then, it's still expensive and that suggests we should be doing more work per workgroup to amortize the cost. The large shared memory allocation may also limit the number of workgroups that can run concurrently, which argues for using larger workgroups. I verified that increasing NUM_ROWS and workgroup size helps:
You don't need to do all of this at once. I think the unrollable loops are a simple change and should help everywhere. For figuring out the best values for all the knobs, we'll need to get more exhaustive data from different HW and also test with real models.
Force-pushed from 6a32664 to dfaa96c
Rebased and added shmem sizes following #11502
Force-pushed from dfaa96c to 431b61d
Rebased and applied suggestions
I don't have time to review this yet, but all the tests are passing on my GCN cards.
LGTM
…1528)
* vulkan: initial support for IQ1_S and IQ1_M quantizations
* vulkan: define MMV kernels for IQ1 quantizations
* devops: increase timeout of Vulkan tests again
* vulkan: simplify ifdef for init_iq_shmem
This pull request implements basic support for the remaining I-quants (IQ1_S and IQ1_M).
Performance is not great but similar to IQ2 quantizations.
To avoid overusing shared memory, the IQ1_S grid has been compressed to 2 bits per value (4 kB shmem size).
This pull request is a draft until #11501 and #11502 are merged.