Vulkan: Add DP4A MMQ and Q8_1 quantization shader #12135
Conversation
Hi @0cc4m, what are you thinking as the long-term plan for this? int8 everywhere (like CUDA?), or just for certain operations or hardware that benefit from it?
I think int8 is likely a win for mat-vec mul in most cases - even where we're not currently math-limited, it should have lower register usage and avoid some of the annoying perf issues where the compiler doesn't schedule things well. And for cases that are math-limited (particularly older hardware) it should give a big boost. For coopmat/coopmat2, while int8 is faster in terms of peak rate than fp16 (at least on NVIDIA), the int32 accumulator takes up a lot of register space and limits the tile sizes, so it may not always be a win.
Overall I'm excited to have the quantization path in place for the B matrix; it enables exploring a lot of new optimizations.
Basically I started this with the goal of exploring new options to improve prompt processing on non-coopmat hardware, and also just to understand how to use int8 for acceleration. I don't think it's worth using over fp16/fp32 on hardware that doesn't have integer dot product acceleration, but for others it may be worth opening a shader path that utilizes it. With Vega20 and also Nvidia Pascal the Vulkan backend is currently noticeably behind, and I think this may be a way to close the gap.
Yes, looking into that would be the next step after this.
Since I store an entire q8_1 block in the k-direction in registers, instead of loading single values for each k, I already have to reconsider tile sizes here, or rethink that approach. The L-tile seems slow and I assume that means it's register-limited.
Yeah, you used fp16 for coopmat2 to reduce memory pressure; maybe it would be worth moving to q8_1? Dequantization in the shader would not require much more compute.
This shader already makes a positive difference on AMD and a huge difference on Intel. A770 performance is finally looking closer to what's expected.
Yeah, you're right. With some changes I got it working on my RX 470, which has no FP16 (do all GPUs with DP4A support FP16?) and no DP4A. It's... slow.
My changes to make it run:
--------------------- ggml/src/ggml-vulkan/ggml-vulkan.cpp ---------------------
index b6cd2f21..8df9383c 100644
@@ -1926,6 +1926,7 @@ static void ggml_vk_load_shaders(vk_device& device) {
CREATE_MM(GGML_TYPE_IQ4_NL, pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_NL].f16acc, matmul_id_iq4_nl_f32, _f16acc, mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
#undef CREATE_MM2
#undef CREATE_MM
+#undef CREATE_MMQ
} else {
// Create 6 variants, {s,m,l}x{unaligned,aligned}
#define CREATE_MM(TYPE, PIPELINE_NAME, NAMELC, F16ACC, WG_DENOMS, WARPTILE, PUSHCONST, PARAMCOUNT, ID) \
@@ -1942,6 +1943,14 @@ static void ggml_vk_load_shaders(vk_device& device) {
if (device->mul_mat ## ID ## _s[TYPE]) \
ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_s, #NAMELC #F16ACC "_aligned_s", NAMELC ## _aligned ## F16ACC ## _fp32_len, NAMELC ## _aligned ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), s_ ## WG_DENOMS, s_ ## WARPTILE, s_align); \
+#define CREATE_MMQ(TYPE, PIPELINE_NAME, NAMELC, F16ACC, WG_DENOMS, WARPTILE, PUSHCONST, PARAMCOUNT, ID) \
+ if (device->mul_mat ## ID ## _l[TYPE]) \
+ ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1); \
+ if (device->mul_mat ## ID ## _m[TYPE]) \
+ ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->m, #NAMELC #F16ACC "_m", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), m_ ## WG_DENOMS, m_ ## WARPTILE, 1); \
+ if (device->mul_mat ## ID ## _s[TYPE]) \
+ ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->s, #NAMELC #F16ACC "_s", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), s_ ## WG_DENOMS, s_ ## WARPTILE, 1); \
+
CREATE_MM(GGML_TYPE_F32, pipeline_matmul_f32, matmul_f32_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
CREATE_MM(GGML_TYPE_F32, pipeline_matmul_f32_f16, matmul_f32_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
CREATE_MM(GGML_TYPE_F16, pipeline_matmul_f16.f32acc, matmul_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
@@ -1968,6 +1977,9 @@ static void ggml_vk_load_shaders(vk_device& device) {
CREATE_MM(GGML_TYPE_IQ4_XS, pipeline_dequant_mul_mat_mat[GGML_TYPE_IQ4_XS].f32acc, matmul_iq4_xs_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
CREATE_MM(GGML_TYPE_IQ4_NL, pipeline_dequant_mul_mat_mat[GGML_TYPE_IQ4_NL].f32acc, matmul_iq4_nl_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+ CREATE_MMQ(GGML_TYPE_Q4_0, pipeline_dequant_mul_mat_mat_q8_1[GGML_TYPE_Q4_0].f32acc, matmul_q4_0_q8_1, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+ CREATE_MMQ(GGML_TYPE_Q8_0, pipeline_dequant_mul_mat_mat_q8_1[GGML_TYPE_Q8_0].f32acc, matmul_q8_0_q8_1, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+
CREATE_MM(GGML_TYPE_F32, pipeline_matmul_id_f32, matmul_id_f32_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
CREATE_MM(GGML_TYPE_F16, pipeline_matmul_id_f16.f32acc, matmul_id_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
CREATE_MM(GGML_TYPE_F16, pipeline_matmul_id_f16_f32.f32acc, matmul_id_f16_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
@@ -1993,6 +2005,7 @@ static void ggml_vk_load_shaders(vk_device& device) {
CREATE_MM(GGML_TYPE_IQ4_XS, pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_XS].f32acc, matmul_id_iq4_xs_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
CREATE_MM(GGML_TYPE_IQ4_NL, pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_NL].f32acc, matmul_id_iq4_nl_f32, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
#undef CREATE_MM
+#undef CREATE_MMQ
}
// mul mat vec
@@ -2431,7 +2444,8 @@ static vk_device ggml_vk_get_device(size_t idx) {
device->coopmat_support = false;
}
- device->integer_dot_product = device->integer_dot_product && shader_integer_dot_product_props.integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated;
+ //device->integer_dot_product = device->integer_dot_product && shader_integer_dot_product_props.integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated;
+ device->integer_dot_product = true;
std::vector<vk::QueueFamilyProperties> queue_family_props = device->physical_device.getQueueFamilyProperties();
@@ -3168,8 +3182,10 @@ static vk_matmul_pipeline ggml_vk_get_mul_mat_mat_pipeline(ggml_backend_vk_conte
default:
return nullptr;
}
-
- return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f16acc;
+ if (ctx->device->fp16)
+ return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f16acc;
+ else
+ return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f32acc;
}
if (src1_type != GGML_TYPE_F32 && !ctx->device->coopmat2) {
--------------- ggml/src/ggml-vulkan/vulkan-shaders/mul_mmq.comp ---------------
index 81fa7b53..780182a3 100644
@@ -4,7 +4,7 @@
#extension GL_EXT_shader_16bit_storage : require
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
-#extension GL_EXT_integer_dot_product : require
+//#extension GL_EXT_integer_dot_product : require
#ifdef FLOAT16
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
@@ -318,9 +318,12 @@ void main() {
[[unroll]] for (uint cr = 0; cr < TM; cr++) {
const uint cache_a_idx = wsir * TM + cr;
const uint sums_idx = (wsic * TN + cc) * (WMITER * TM) + wsir * TM + cr;
- int32_t q_sum = 0;
+ float q_sum = 0;
[[unroll]] for (uint idx_k = 0; idx_k < BK / 4; idx_k++) {
- q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);
+ //q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);
+ vec4 cav = vec4(unpack8(cache_a[cache_a_idx].qs[idx_k]));
+ vec4 cbv = vec4(unpack8(cache_b[cc].qs[idx_k]));
+ q_sum += dot(cav, cbv);
}
#if QUANT_AUXF == 1
@@ -330,7 +333,7 @@ void main() {
// const float factor = float(cache_a[cache_a_idx].d) * float(cache_b[cc].d);
#endif
- sums[sums_idx] = ACC_TYPE(fma(float(q_sum), factor, float(sums[sums_idx])));
+ sums[sums_idx] = ACC_TYPE(fma(q_sum, factor, float(sums[sums_idx])));
}
}
}
I recreated the dot product instruction using floats, as that ended up being faster than using ints. On my card it takes eight cycles to extract the int8s from the int32s and another four to do the FMAs. If we use float B like what's on master, that becomes four FMAs, and of course with DP4A it's a single 1-2 cycle instruction. It's possible to make this run much faster on old GPUs by using the old mul_mm and dequantizing the Q8_1 B matrix first, but that's probably only worth doing if we see good improvements on the matvec side.
Since I don't have DP4A and am compute rather than memory bound for mat vec, I won't be able to optimize this properly. At this point I'm probably going to stick with the float implementation until I get a new GPU 😞.
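For reference, the three inner-loop variants being compared look roughly like this (a condensed sketch based on the diff above, not exact code; the accumulator is int32 for the first two variants and float for the last):

// 1. DP4A via GL_EXT_integer_dot_product: a single 1-2 cycle instruction
q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);

// 2. Integer fallback: unpack the packed int8s, then multiply-accumulate
//    (slow without DP4A: cycles spent unpacking plus four integer MADs per dword)
i8vec4 ca = unpack8(cache_a[cache_a_idx].qs[idx_k]);
i8vec4 cb = unpack8(cache_b[cc].qs[idx_k]);
q_sum += int(ca.x) * int(cb.x) + int(ca.y) * int(cb.y) + int(ca.z) * int(cb.z) + int(ca.w) * int(cb.w);

// 3. Float fallback from the patch above: convert to vec4 and use a float dot (four FMAs)
vec4 cav = vec4(unpack8(cache_a[cache_a_idx].qs[idx_k]));
vec4 cbv = vec4(unpack8(cache_b[cc].qs[idx_k]));
q_sum += dot(cav, cbv);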
Ah yeah, I forgot about that. You could look into q8_1 src1 support on mul_mm and mat_vec if you have too much time on your hands, but I'm not sure if it would help.
All except Nvidia Pascal/GTX 1000.
I just remembered that I was thinking of @daniandtheweb when I pinged @netrunnereve, my bad. The RX 5700 XT should profit a lot from the use of DP4A.
RDNA1 removed the V_DOT* instructions from GCN; they only returned in RDNA2, so no, this will not help the RX 5700 XT.
Oh wow, that's terrible.
That's pretty much the correct reaction to RDNA1 in general. Note it's a bit more complex than that, as Navi 12 (the variant with HBM used in the Pro 520) does have a few V_DOT variants, but that's an edge case hardly worth considering.
Apparently the RX 5500 XT (Navi 14) also supports it. It's quite unfortunate that the 5700 series lacks any support for it.
You're right, it does. This is very confusing. |
I added a basic coopmat implementation, but performance is pretty bad currently. It performs worse than the integer dot product version and doesn't support AMD yet. I'll look into fixing these problems. @jeffbolznv you spotted a number of performance issues with my fp16 coopmat implementation last time, do you see any low-hanging fruit this time or is the way I implemented it just not good enough yet?
I think the biggest perf issue is the dynamic indexing to get the float factor in the muladd loop. But I didn't fully follow the code, so I don't have a specific suggestion for what to do about it. Maybe you can load the factors directly from shared memory into a matrix and do a componentwise matrix*matrix multiply?
That's a very good idea; I forgot that componentwise operations and conversions are possible. As long as I can convert the int32 matrix to the accumulator type, I should be able to run and store matrices in a similar way to the existing coopmat shader. I'll give it a shot.
Just a random thought, but have you compared text generation speeds with FP16 B versus FP32 B on memory-bound GPUs? This should give us some idea as to what improvements we can expect with Q8 matvec. Meanwhile, for AMD at least, I noticed that they have this instruction for an FP16 dot product, which should finish in a single cycle:
I think this can be triggered with a |
I have not looked into that yet, but I think there should be some improvement from the combination of reduced memory bandwidth requirement and simplified arithmetic using integer vec dot.
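As a rough back-of-envelope (assuming the standard ggml q8_1 layout of 32 int8 values plus two fp16 fields per block, i.e. 36 bytes per 32 values): q8_1 src1 costs about 1.125 bytes per value versus 2 bytes for fp16 and 4 bytes for fp32, so roughly 1.8x less B-matrix traffic than fp16 and about 3.5x less than fp32, on top of the cheaper integer dot product.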
Yeah, fp16 dot product. I have tried to use it in the past, without success, but that was so long ago that I'm not sure if my implementation was correct. Sadly, that instruction is (un)available on the same GPUs as the integer dot instructions. Maybe the mul_mm shader can be refactored slightly to enable vdot; at least it would not need a separate implementation.
It's actually not simple to implement this: if I calculate and load the factors just before the calculations, it's just as slow as the current implementation. I would have to load them earlier, or this is not going to work. I haven't been able to think of a good way of doing this yet. If I can't come up with one, I'll finish this PR without the coopmat implementation for now.
I gave up on coopmat for now, so that I can finish the rest. Performance got a little better.
ggml_vulkan: 0 = AMD Radeon (TM) Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none
ggml_vulkan: 0 = AMD Radeon RX 6800 XT (RADV NAVI21) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none
ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none
build: 4375415 (4938)
Hi @0cc4m, I'm interested in trying out dp4a for the mat-vec mul shaders. Could you maybe split out and submit the code for quantizing to q8_1 to unblock that? |
I'll see what I can do, but splitting that out might be about as much work as finishing this PR. I should hopefully be able to get back to it within the next few days.
No worries, I can just branch from here for now and I'll deal with any conflicts later. |
I was able to get the q4_k matvecmul shader working based on this change. I didn't see a perf boost on Ada, but I need to try it on Ampere where fp32 is slower and I think there should be more upside. |
Here are some up-to-date results. While testing without fp16 support, I noticed that there must be a significant problem with the current mul_mm fp16 implementation, because performance increases by a lot when disabling it and using fp32. I'll have to look into that. I'll look into mul_mat_id, then this PR should be ready.
Radeon Pro VII:
A770:
RTX 3090:
MUL_MAT_ID is not straightforward, so I'm leaving it out as well. This should be ready now. |
RTX 3090:
AMD Radeon RX 6800 XT:
AMD Radeon Pro VII:
Intel A770:
Hello, I am using a discrete Intel Xe DG1 (80 EU) card. When I run
but in this repo, so does that mean that, even if this PR is merged, I won't benefit from it? Thanks.
This PR does not require coopmat, no. I'm not familiar with that GPU, but as long as it supports accelerated integer dot product (DP4A), which as far as I know all Intel Xe GPUs do, it will benefit from this PR, yes. |
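(For reference: this capability is reported via VK_KHR_shader_integer_dot_product, which is core in Vulkan 1.3. The flag checked in the diff above, integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated, corresponds to the dotPacked4x8AccSatEXT instruction the MMQ shader uses.)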
Thanks. I just double checked VkPhysicalDeviceVulkan13Properties:
Yeah, that's enough. |
Steam Deck APU:
I bumped the Windows Vulkan SDK version for the GitHub build, so that it supports compiling the GLSL integer dot extension. The Ubuntu build already had it.
@jeffbolznv How far did you get with DP4A mmv? Judging from these results it might be very good for specific AMD generations and for all Intel GPUs. |
I ported q4_k to use dp4a; changes are at https://github.com/jeffbolznv/llama.cpp/tree/q4_k_int8 if you want to try it. I didn't see a meaningful improvement on RTX 3070 or 4070; it's maybe faster enough to pay for the overhead of the quantization, but not enough to really help.
Thank you for this commit. DP4A MMQ gives a great speedup for pp with q4_0 on the Intel A770. And @jeffbolznv's DP4A patch for q4_k gives a nice boost to tg on the Intel A770.
Yeah, it can be; it's just going to take a little while to implement all of the repacking functions. I'll look into k-quants and DP4A MMV soon.
There seems to be an issue here with NaNs that I missed, that leads to incoherence. I'll look into it. |
This is a basic VK_KHR_shader_integer_dot_product (DP4A) implementation for matrix-matrix multiplication. I added a quantization shader that can quantize float32 src1 into q8_1, and an MMQ shader that can multiply a q8_0 src0 with a q8_1 src1.
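Conceptually, the quantization step works like ggml's CPU-side q8_1 quantization. A simplified per-block sketch (names like vals and block are placeholders; the real shader additionally handles strides and packing):

// Simplified q8_1 quantization of one 32-value block (sketch, not the actual shader)
float amax = 0.0;
[[unroll]] for (uint i = 0; i < 32; i++) {
    amax = max(amax, abs(vals[i]));
}
const float d = amax / 127.0;                 // per-block scale
const float id = d != 0.0 ? 1.0 / d : 0.0;
int sum = 0;
[[unroll]] for (uint i = 0; i < 32; i++) {
    const int q = int(round(vals[i] * id));   // quantize into the int8 range
    block.qs[i] = int8_t(q);
    sum += q;
}
block.d = float16_t(d);
block.s = float16_t(float(sum) * d);          // d * sum(qs), used as a correction term in the dot products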
Features I have to implement before this could be merged:
I'm opening this already to get some feedback about the implementation. Thank you @jeffbolznv for finishing the GLSL integer dot extension.
@netrunnereve In the long run we probably also want to use DP4A and q8_1 for matrix-vector multiplication to reduce the memory bandwidth bottleneck. Let me know if you want to look into that.
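The core of such a mat-vec path would be the same packed integer dot product the MMQ shader uses; roughly, per q8_0 weight block and q8_1 activation block (a sketch with placeholder names, not code from this PR):

// 32 int8 values per block, packed 4 per int32, so 8 DP4A instructions per block
int32_t isum = 0;
[[unroll]] for (uint k = 0; k < 8; k++) {
    isum = dotPacked4x8AccSatEXT(a_qs[k], b_qs[k], isum);
}
acc += float(isum) * float(a_d) * float(b_d); // apply both per-block scales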
As far as hardware goes, integer dot product / DP4A is supported by Nvidia since Pascal/GTX 1000, AMD since Vega20/Radeon VII/MI50 (but not most of RDNA1/RX 5000 series), and Intel Xe (I think).