Vulkan: Add DP4A MMQ and Q8_1 quantization shader #12135

Merged: 13 commits from 0cc4m/vulkan-mmq-dp4a into master on Mar 31, 2025

Conversation

0cc4m (Collaborator) commented Mar 1, 2025

This is a basic VK_KHR_shader_integer_dot_product (DP4A) implementation for matrix-matrix multiplication. I added a quantization shader that can quantize float32 src1 into q8_1, and an MMQ shader that can multiply a q8_0 src0 with a q8_1 src1.
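
To illustrate the two pieces, here is a minimal, hypothetical sketch (not the PR's quantization or mul_mmq shader; the buffer layout, names, and tiling are simplified, and the q8_1 block's sum term is omitted): quantize 32 floats into a packed q8_1-style block, then multiply it against a q8_0-style block with the integer dot product builtin.

```glsl
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
#extension GL_EXT_integer_dot_product : require

layout(local_size_x = 1) in;

layout(std430, binding = 0) readonly buffer Src1 { float x[32]; };             // fp32 activations
layout(std430, binding = 1) readonly buffer Src0 { float d_a; int qs_a[8]; };  // q8_0-style block
layout(std430, binding = 2) writeonly buffer Dst { float result; };

void main() {
    // 1) Quantize 32 fp32 values to int8 with one float scale per block
    float amax = 0.0;
    for (uint i = 0; i < 32; i++) {
        amax = max(amax, abs(x[i]));
    }
    const float d_b = amax / 127.0;
    const float id  = d_b != 0.0 ? 1.0 / d_b : 0.0;

    int qs_b[8];
    for (uint k = 0; k < 8; k++) {
        // four int8 quants packed into one int32
        const vec4 v = vec4(x[4*k + 0], x[4*k + 1], x[4*k + 2], x[4*k + 3]) * id;
        qs_b[k] = pack32(i8vec4(round(v)));
    }

    // 2) DP4A: each call does four int8*int8 multiplies accumulated into an int32
    int q_sum = 0;
    for (uint k = 0; k < 8; k++) {
        q_sum = dotPacked4x8AccSatEXT(qs_a[k], qs_b[k], q_sum);
    }

    // the per-block float scales are applied once, outside the integer loop
    result = float(q_sum) * d_a * d_b;
}
```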

Features I have to implement before this could be merged:

  • Performance tuning
  • Q4_0, Q4_1, Q5_0, Q5_1 support
  • Coopmat support
  • MUL_MAT_ID support
  • Dealing with glslc without integer dot product support
  • Clean up the GGML q8_1 changes I added for shader validation

I'm opening this already to get some feedback about the implementation. Thank you @jeffbolznv for finishing the GLSL integer dot extension.

@netrunnereve In the long run we probably also want to use DP4A and q8_1 for matrix-vector multiplication to reduce the memory bandwidth bottleneck. Let me know if you want to look into that.

As far as hardware goes, integer dot product / DP4A is supported by Nvidia since Pascal/GTX 1000, AMD since Vega20/Radeon VII/MI50 (but not most of RDNA1/RX 5000 series), and Intel Xe (I think).

github-actions bot added the Vulkan (issues specific to the Vulkan backend) and ggml (changes relating to the ggml tensor library for machine learning) labels on Mar 1, 2025
jeffbolznv (Collaborator) commented:

Hi @0cc4m,

What are you thinking as the long-term plan for this? int8 everywhere (like CUDA?), or just for certain operations or HW that benefits from it?

I think int8 is likely a win for mat-vec mul in most cases - even where we're not currently math limited, it should have lower register usage and avoid some of the annoying perf issues where the compiler doesn't schedule things well. And for cases that are math-limited (particularly older HW) it should give a big boost.

For coopmat/coopmat2, while int8 is faster in terms of peak rate than fp16 (at least on NVIDIA), the int32 accumulator takes up a lot of register space and limits the tile sizes, and may not always be a win.

Overall I'm excited to have the quantization path in place for the B matrix, it enables exploring a lot of new optimizations.

0cc4m (Collaborator, Author) commented Mar 2, 2025

Hi @0cc4m,

What are you thinking as the long-term plan for this? int8 everywhere (like CUDA?), or just for certain operations or HW that benefits from it?

Basically I started out with this with the goal of exploring new options to improve prompt processing on non-coopmat hardware, and also just to understand how to use int8 for acceleration. I don't think it's worth using over fp16/fp32 on hardware that doesn't have integer dot product acceleration, but for others it may be worth opening a shader path that utilizes it.

With Vega20 and also Nvidia Pascal the Vulkan backend is currently noticeably behind, and I think this may be a way to close the gap.

I think int8 is likely a win for mat-vec mul in most cases - even where we're not currently math limited, it should have lower register usage and avoid some of the annoying perf issues where the compiler doesn't schedule things well. And for cases that are math-limited (particularly older HW) it should give a big boost.

Yes, looking into that would be the next step after this.

For coopmat/coopmat2, while int8 is faster in terms of peak rate than fp16 (at least on NVIDIA), the int32 accumulator takes up a lot of register space and limits the tile sizes, and may not always be a win.

Since I store an entire q8_1 block in k-direction in registers, instead of loading single values for each k, I already have to reconsider tile sizes here, or rethink that approach. The L-tile seems slow and I assume that means it's register-limited.

Overall I'm excited to have the quantization path in place for the B matrix, it enables exploring a lot of new optimizations.

Yeah, you used fp16 for coopmat2 to reduce memory pressure, maybe it would be worth moving to q8_1? Dequantization in the shader would not require much more compute.
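
For context, unpacking a packed q8_1 value back to floats in the shader is just an unpack plus one multiply per four values. A rough sketch of that step, with illustrative names (this mirrors the unpack8/vec4 pattern from the shader above, not actual code from this PR):

```glsl
#version 450
#extension GL_EXT_shader_explicit_arithmetic_types_int8 : require

layout(local_size_x = 1) in;

layout(std430, binding = 0) readonly buffer Blk { float d; int qs[8]; };  // one q8_1-style block
layout(std430, binding = 1) writeonly buffer Out { vec4 v[8]; };

void main() {
    for (uint k = 0; k < 8; k++) {
        // unpack8 splits one int32 into an i8vec4; a single multiply restores the floats
        v[k] = vec4(unpack8(qs[k])) * d;
    }
}
```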

0cc4m (Collaborator, Author) commented Mar 2, 2025

This shader already makes a positive difference on AMD and a huge difference on Intel. A770 performance is finally looking more like expected.

| device | model | size | params | backend | ngl | test | t/s before | t/s after |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Intel A770 | llama 8B Q4_0 | 5.61 GiB | 8.03 B | Vulkan | 99 | pp512 | 103.28 ± 0.10 | 471.46 ± 2.81 |
| Intel A770 | llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 97.24 ± 0.31 | 386.83 ± 2.28 |
| AMD Radeon Pro VII | llama 8B Q4_0 | 5.61 GiB | 8.03 B | Vulkan | 99 | pp512 | 311.16 ± 0.60 | 436.11 ± 0.71 |
| AMD Radeon Pro VII | llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 307.03 ± 0.77 | 387.02 ± 0.88 |

netrunnereve (Collaborator) commented Mar 2, 2025

I don't think it's worth using over fp16/fp32 on hardware that doesn't have integer dot product acceleration, but for others it may be worth opening a shader path that utilizes it.

Yeah you're right.

With some changes I got it working on my RX 470, which has no FP16 (do all GPUs with DP4A support FP16?) and no DP4A. It's... slow.

My changes to make it run:
--------------------- ggml/src/ggml-vulkan/ggml-vulkan.cpp ---------------------
index b6cd2f21..8df9383c 100644
@@ -1926,6 +1926,7 @@ static void ggml_vk_load_shaders(vk_device& device) {
         CREATE_MM(GGML_TYPE_IQ4_NL,  pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_NL].f16acc,  matmul_id_iq4_nl_f32,  _f16acc, mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
 #undef CREATE_MM2
 #undef CREATE_MM
+#undef CREATE_MMQ
     } else {
         // Create 6 variants, {s,m,l}x{unaligned,aligned}
 #define CREATE_MM(TYPE, PIPELINE_NAME, NAMELC, F16ACC, WG_DENOMS, WARPTILE, PUSHCONST, PARAMCOUNT, ID) \
@@ -1942,6 +1943,14 @@ static void ggml_vk_load_shaders(vk_device& device) {
         if (device->mul_mat ## ID ## _s[TYPE]) \
             ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->a_s, #NAMELC #F16ACC "_aligned_s", NAMELC ## _aligned ## F16ACC ## _fp32_len, NAMELC ## _aligned ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), s_ ## WG_DENOMS, s_ ## WARPTILE, s_align);   \
 
+#define CREATE_MMQ(TYPE, PIPELINE_NAME, NAMELC, F16ACC, WG_DENOMS, WARPTILE, PUSHCONST, PARAMCOUNT, ID) \
+        if (device->mul_mat ## ID ## _l[TYPE]) \
+            ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->l, #NAMELC #F16ACC "_l", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), l_ ## WG_DENOMS, l_ ## WARPTILE, 1);   \
+        if (device->mul_mat ## ID ## _m[TYPE]) \
+            ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->m, #NAMELC #F16ACC "_m", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), m_ ## WG_DENOMS, m_ ## WARPTILE, 1);   \
+        if (device->mul_mat ## ID ## _s[TYPE]) \
+            ggml_vk_create_pipeline(device, device-> PIPELINE_NAME ->s, #NAMELC #F16ACC "_s", NAMELC ## F16ACC ## _fp32_len, NAMELC ## F16ACC ## _fp32_data, "main", PARAMCOUNT, sizeof(PUSHCONST), s_ ## WG_DENOMS, s_ ## WARPTILE, 1);   \
+
         CREATE_MM(GGML_TYPE_F32, pipeline_matmul_f32, matmul_f32_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
         CREATE_MM(GGML_TYPE_F32, pipeline_matmul_f32_f16, matmul_f32_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
         CREATE_MM(GGML_TYPE_F16, pipeline_matmul_f16.f32acc, matmul_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 3, );
@@ -1968,6 +1977,9 @@ static void ggml_vk_load_shaders(vk_device& device) {
         CREATE_MM(GGML_TYPE_IQ4_XS,  pipeline_dequant_mul_mat_mat[GGML_TYPE_IQ4_XS].f32acc,  matmul_iq4_xs_f32,  , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
         CREATE_MM(GGML_TYPE_IQ4_NL,  pipeline_dequant_mul_mat_mat[GGML_TYPE_IQ4_NL].f32acc,  matmul_iq4_nl_f32,  , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
 
+        CREATE_MMQ(GGML_TYPE_Q4_0, pipeline_dequant_mul_mat_mat_q8_1[GGML_TYPE_Q4_0].f32acc, matmul_q4_0_q8_1, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+        CREATE_MMQ(GGML_TYPE_Q8_0, pipeline_dequant_mul_mat_mat_q8_1[GGML_TYPE_Q8_0].f32acc, matmul_q8_0_q8_1, , mmq_wg_denoms, warptile_mmq, vk_mat_mat_push_constants, 3, );
+
         CREATE_MM(GGML_TYPE_F32, pipeline_matmul_id_f32, matmul_id_f32_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
         CREATE_MM(GGML_TYPE_F16, pipeline_matmul_id_f16.f32acc, matmul_id_f16, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
         CREATE_MM(GGML_TYPE_F16, pipeline_matmul_id_f16_f32.f32acc, matmul_id_f16_f32, , wg_denoms, warptile, vk_mat_mat_push_constants, 4, _id);
@@ -1993,6 +2005,7 @@ static void ggml_vk_load_shaders(vk_device& device) {
         CREATE_MM(GGML_TYPE_IQ4_XS,  pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_XS].f32acc,  matmul_id_iq4_xs_f32,  , mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
         CREATE_MM(GGML_TYPE_IQ4_NL,  pipeline_dequant_mul_mat_mat_id[GGML_TYPE_IQ4_NL].f32acc,  matmul_id_iq4_nl_f32,  , mmq_wg_denoms, warptile_mmq, vk_mat_mat_id_push_constants, 4, _id);
 #undef CREATE_MM
+#undef CREATE_MMQ
     }
 
     // mul mat vec
@@ -2431,7 +2444,8 @@ static vk_device ggml_vk_get_device(size_t idx) {
             device->coopmat_support = false;
         }
 
-        device->integer_dot_product = device->integer_dot_product && shader_integer_dot_product_props.integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated;
+        //device->integer_dot_product = device->integer_dot_product && shader_integer_dot_product_props.integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated;
+        device->integer_dot_product = true;
 
         std::vector<vk::QueueFamilyProperties> queue_family_props = device->physical_device.getQueueFamilyProperties();
 
@@ -3168,8 +3182,10 @@ static vk_matmul_pipeline ggml_vk_get_mul_mat_mat_pipeline(ggml_backend_vk_conte
             default:
                 return nullptr;
         }
-
-        return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f16acc;
+        if (ctx->device->fp16)
+            return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f16acc;
+        else
+            return ctx->device->pipeline_dequant_mul_mat_mat_q8_1[src0_type].f32acc;
     }
 
     if (src1_type != GGML_TYPE_F32 && !ctx->device->coopmat2) {

--------------- ggml/src/ggml-vulkan/vulkan-shaders/mul_mmq.comp ---------------
index 81fa7b53..780182a3 100644
@@ -4,7 +4,7 @@
 #extension GL_EXT_shader_16bit_storage : require
 #extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
 
-#extension GL_EXT_integer_dot_product : require
+//#extension GL_EXT_integer_dot_product : require
 
 #ifdef FLOAT16
 #extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
@@ -318,9 +318,12 @@ void main() {
                     [[unroll]] for (uint cr = 0; cr < TM; cr++) {
                         const uint cache_a_idx = wsir * TM + cr;
                         const uint sums_idx = (wsic * TN + cc) * (WMITER * TM) + wsir * TM + cr;
-                        int32_t q_sum = 0;
+                        float q_sum = 0;
                         [[unroll]] for (uint idx_k = 0; idx_k < BK / 4; idx_k++) {
-                            q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);
+                            //q_sum = dotPacked4x8AccSatEXT(cache_a[cache_a_idx].qs[idx_k], cache_b[cc].qs[idx_k], q_sum);
+                            vec4 cav = vec4(unpack8(cache_a[cache_a_idx].qs[idx_k]));
+                            vec4 cbv = vec4(unpack8(cache_b[cc].qs[idx_k]));
+                            q_sum += dot(cav, cbv);
                         }
 
 #if QUANT_AUXF == 1
@@ -330,7 +333,7 @@ void main() {
                         // const float factor = float(cache_a[cache_a_idx].d) * float(cache_b[cc].d);
 #endif
 
-                        sums[sums_idx] = ACC_TYPE(fma(float(q_sum), factor, float(sums[sums_idx])));
+                        sums[sums_idx] = ACC_TYPE(fma(q_sum, factor, float(sums[sums_idx])));
                     }
                 }
             }
| model | size | params | backend | ngl | threads | main_gpu | sm | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 (Master) | 4.33 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | pp512 | 158.21 ± 0.23 |
| llama 8B Q4_0 (PR) | 4.33 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | pp512 | 72.88 ± 0.14 |
| llama 8B Q8_0 (Master) | 7.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | pp512 | 153.88 ± 0.32 |
| llama 8B Q8_0 (PR) | 7.95 GiB | 8.03 B | Vulkan | 100 | 8 | 1 | none | pp512 | 64.65 ± 0.01 |

I recreated the dot product instruction using floats, as that ended up being faster than using ints. On my card it takes eight cycles to extract the int8s from the int32s and another four to do the FMAs. If we use a float B like on master, that becomes four FMAs, and of course with DP4A it's a single one- or two-cycle instruction.

It's possible to make this run much faster on old GPUs by using the old mul_mm and dequantizing the Q8_1 B matrix first, but that's probably only worth doing if we see good improvements on the matvec side.

@netrunnereve In the long run we probably also want to use DP4A and q8_1 for matrix-vector multiplication to reduce the memory bandwidth bottleneck. Let me know if you want to look into that.

Since I don't have DP4A and am compute rather than memory bound for mat vec I won't be able to optimize this properly. At this point I'm probably going to stick with the float implementation until I get a new GPU 😞.

0cc4m (Collaborator, Author) commented Mar 3, 2025

Since I don't have DP4A and am compute rather than memory bound for mat vec I won't be able to optimize this properly. At this point I'm probably going to stick with the float implementation until I get a new GPU 😞.

Ah yeah, I forgot about that. You could look into q8_1 src1 support in mul_mm and mat_vec if you have too much time on your hands, but I'm not sure it would help.

do all GPUs with DP4A support FP16?

All except Nvidia Pascal/GTX 1000

0cc4m (Collaborator, Author) commented Mar 3, 2025

I now remember that I was thinking of @daniandtheweb when I pinged netrunnereve, my bad. The RX 5700 XT should benefit a lot from the use of DP4A.

IMbackK (Collaborator) commented Mar 3, 2025

RDNA1 removed the V_DOT* instructions that GCN had; they only returned in RDNA2, so no, this will not help the RX 5700 XT.

0cc4m (Collaborator, Author) commented Mar 3, 2025

RDNA1 removed the V_DOT* instructions that GCN had; they only returned in RDNA2, so no, this will not help the RX 5700 XT.

Oh wow, that's terrible.

IMbackK (Collaborator) commented Mar 3, 2025

That's pretty much the correct reaction to RDNA1 in general. Note that it's a bit more complex than that, as Navi 12 (the variant with HBM used in the Pro 520) does have a few V_DOT variants, but that's an edge case hardly worth considering.

daniandtheweb (Contributor) commented Mar 3, 2025

Apparently the RX 5500 XT (Navi 14) also supports it. It's quite unfortunate that the 5700 series lacks any support for it.

0cc4m (Collaborator, Author) commented Mar 3, 2025

Apparently the RX 5500 XT (Navi 14) also supports it. It's quite unfortunate that the 5700 series lacks any support for it.

You're right, it does. This is very confusing.

0cc4m (Collaborator, Author) commented Mar 10, 2025

I added a basic coopmat implementation, but performance is pretty bad currently. It performs worse than the integer dot product version and doesn't support AMD yet. I'll look into fixing these problems.

@jeffbolznv You spotted a number of performance issues with my fp16 coopmat implementation last time; do you see any low-hanging fruit this time, or is the way I implemented it just not good enough yet?

jeffbolznv (Collaborator) commented:

I think the biggest perf issue is the dynamic indexing to get the float factor in the muladd loop. But I didn't fully follow the code, so I don't have a specific suggestion for what to do about it. Maybe you can load the factors directly from shared memory into a matrix and do a componentwise matrix*matrix multiply?

0cc4m (Collaborator, Author) commented Mar 10, 2025

I think the biggest perf issue is the dynamic indexing to get the float factor in the muladd loop. But I didn't fully follow the code, so I don't have a specific suggestion for what to do about it. Maybe you can load the factors directly from shared memory into a matrix and do a componentwise matrix*matrix multiply?

That's a very good idea, I forgot that componentwise operations and conversions are possible. As long as I can convert the int32 matrix to the accumulator type, I should be able to run and store matrices in a similar way to the existing coopmat shader. I'll give it a shot.

netrunnereve (Collaborator) commented:

we probably also want to use DP4A and q8_1 for matrix-vector multiplication to reduce the memory bandwidth bottleneck.

Just a random thought but have you compared text generation speeds with FP16 B versus FP32 B on memory bound GPUs? This should give us some idea as to what improvements we can expect with Q8 matvec.

Meanwhile for AMD at least I noticed that they have this instruction for a FP16 dot product, which should finish in a single cycle:

V_DOT2_F32_F16
D.f32 = S0.f16[0] * S1.f16[0] + S0.f16[1] * S1.f16[1] + S2.f32

I think this can be triggered with a dot() instruction and it might help the old mul_mm shader until this PR gets merged.
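
To sketch the idea (a hypothetical snippet, not code from this PR): accumulating pairs of fp16 products into an fp32 sum through dot() is the pattern a driver could lower to V_DOT2_F32_F16 on hardware that has it. The buffer names and sizes are illustrative.

```glsl
#version 450
#extension GL_EXT_shader_16bit_storage : require
#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require

layout(local_size_x = 1) in;

layout(std430, binding = 0) readonly buffer A { f16vec2 a[16]; };
layout(std430, binding = 1) readonly buffer B { f16vec2 b[16]; };
layout(std430, binding = 2) writeonly buffer D { float result; };

void main() {
    float acc = 0.0;
    for (uint i = 0; i < 16; i++) {
        // two fp16 multiplies plus an fp32 accumulate per iteration
        acc += float(dot(a[i], b[i]));
    }
    result = acc;
}
```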

0cc4m (Collaborator, Author) commented Mar 11, 2025

we probably also want to use DP4A and q8_1 for matrix-vector multiplication to reduce the memory bandwidth bottleneck.

Just a random thought but have you compared text generation speeds with FP16 B versus FP32 B on memory bound GPUs? This should give us some idea as to what improvements we can expect with Q8 matvec.

I have not looked into that yet, but I think there should be some improvement from the combination of reduced memory bandwidth requirement and simplified arithmetic using integer vec dot.
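
To illustrate the idea (a hypothetical sketch, not code from this PR or any branch): a DP4A matrix-vector kernel could keep both operands as packed int8 and only touch the float scales once per block.

```glsl
#version 450
#extension GL_EXT_integer_dot_product : require

layout(local_size_x = 64) in;

layout(std430, binding = 0) readonly buffer AD { float d_a[]; };   // one scale per src0 block
layout(std430, binding = 1) readonly buffer AQ { int   qs_a[]; };  // 8 packed int32 per block
layout(std430, binding = 2) readonly buffer BD { float d_b[]; };   // one scale per src1 block
layout(std430, binding = 3) readonly buffer BQ { int   qs_b[]; };
layout(std430, binding = 4) writeonly buffer Y { float y[]; };

layout(push_constant) uniform P { uint ncols; } p;  // row length, a multiple of 32

void main() {
    // one invocation per output row; the dispatch is assumed to match the row count
    const uint row = gl_GlobalInvocationID.x;
    const uint blocks_per_row = p.ncols / 32;

    float acc = 0.0;
    for (uint b = 0; b < blocks_per_row; b++) {
        const uint ia = row * blocks_per_row + b;
        int q_sum = 0;
        for (uint k = 0; k < 8; k++) {
            // four int8 multiply-accumulates per call, no unpacking in the hot loop
            q_sum = dotPacked4x8AccSatEXT(qs_a[ia * 8u + k], qs_b[b * 8u + k], q_sum);
        }
        acc += float(q_sum) * d_a[ia] * d_b[b];
    }
    y[row] = acc;
}
```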

Meanwhile for AMD at least I noticed that they have this instruction for a FP16 dot product, which should finish in a single cycle:

V_DOT2_F32_F16
D.f32 = S0.f16[0] * S1.f16[0] + S0.f16[1] * S1.f16[1] + S2.f32

I think this can be triggered with a dot() instruction and it might help the old mul_mm shader until this PR gets merged.

Yeah, the fp16 dot product. I tried to use it in the past, without success, but that was so long ago that I'm not sure my implementation was correct. Sadly, that instruction is available (and unavailable) on the same GPUs as the integer dot instructions.

Maybe the mul_mm shader can be refactored slightly to enable vdot, at least it would not need a separate implementation.

0cc4m (Collaborator, Author) commented Mar 17, 2025

I think the biggest perf issue is the dynamic indexing to get the float factor in the muladd loop. But I didn't fully follow the code, so I don't have a specific suggestion for what to do about it. Maybe you can load the factors directly from shared memory into a matrix and do a componentwise matrix*matrix multiply?

That's a very good idea, I forgot that componentwise operations and conversions are possible. As long as I can convert the int32 matrix to the accumulator type, I should be able to run and store matrices in a similar way to the existing coopmat shader. I'll give it a shot.

It's actually not simple to implement this: if I calculate and load the factors just before the calculations, it's just as slow as the current implementation. I would have to load them earlier, or this is not going to work. I haven't been able to think of a good way of doing that yet.

If I can't think of one, I'll finish this PR without the coopmat implementation for now.

0cc4m force-pushed the 0cc4m/vulkan-mmq-dp4a branch from 32bbd92 to 34ff5e1 on March 21, 2025, 21:30
0cc4m (Collaborator, Author) commented Mar 21, 2025

I gave up on coopmat for now, so that I can finish the rest. Performance got a little better.

ggml_vulkan: 0 = AMD Radeon (TM) Pro VII (RADV VEGA20) (radv) | uma: 0 | fp16: 1 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | test | t/s before | t/s after | t/s ROCm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 318.39 ± 0.52 | 495.27 ± 2.83 | 1015.01 ± 1.31 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 307.08 ± 0.29 | 434.53 ± 1.59 | 398.00 ± 0.12 |

ggml_vulkan: 0 = AMD Radeon RX 6800 XT (RADV NAVI21) (radv) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | test | t/s before | t/s after | t/s ROCm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 919.51 ± 0.82 | 1319.91 ± 1.03 | 1679.58 ± 1.62 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 897.59 ± 1.34 | 1127.53 ± 2.47 | 1619.72 ± 1.72 |

ggml_vulkan: 0 = Intel(R) Arc(tm) A770 Graphics (DG2) (Intel open-source Mesa driver) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 65536 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | test | t/s before | t/s after |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 103.36 ± 0.10 | 538.93 ± 2.49 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 97.22 ± 0.25 | 417.65 ± 2.04 |

ggml_vulkan: 0 = NVIDIA GeForce RTX 3090 (NVIDIA) | uma: 0 | fp16: 1 | warp size: 32 | shared memory: 49152 | int dot: 1 | matrix cores: none

| model | size | params | backend | ngl | test | t/s before | t/s after |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 1026.65 ± 2.48 | 1371.39 ± 2.45 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 999.94 ± 2.58 | 1181.85 ± 3.87 |

build: 4375415 (4938)

jeffbolznv (Collaborator) commented:

Hi @0cc4m,

I'm interested in trying out dp4a for the mat-vec mul shaders. Could you maybe split out and submit the code for quantizing to q8_1 to unblock that?

0cc4m (Collaborator, Author) commented Mar 26, 2025

I'll see what I can do, but splitting that out might be about as much work as finishing this PR. I should hopefully be able to get back to it within the next few days.

jeffbolznv (Collaborator) commented:

No worries, I can just branch from here for now and I'll deal with any conflicts later.

jeffbolznv (Collaborator) commented:

I was able to get the q4_k mat-vec mul shader working based on this change. I didn't see a perf boost on Ada, but I need to try it on Ampere, where fp32 is slower and I think there should be more upside.

0cc4m (Collaborator, Author) commented Mar 29, 2025

Here are some up-to-date results. While testing without fp16 support, I noticed that there must be a significant problem with the current mul_mm fp16 implementation, because performance increases by a lot when disabling it and using fp32. I'll have to look into that.

I'll look into mul_mat_id, then this PR should be ready.


Radeon Pro VII:

| model | size | params | backend | ngl | test | t/s fp16 + !int_dot (master) | t/s fp32 + !int_dot | t/s fp32 + int_dot | t/s fp16 + int_dot (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 304.30 ± 0.22 | 393.14 ± 0.41 | 493.76 ± 2.33 | 492.39 ± 1.38 |

A770:

| model | size | params | backend | ngl | test | t/s fp16 + !int_dot (master) | t/s fp32 + !int_dot | t/s fp32 + int_dot | t/s fp16 + int_dot (PR) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 165.41 ± 0.12 | 289.87 ± 0.22 | 572.16 ± 3.33 | 556.46 ± 2.91 |

RTX 3090:

| model | size | params | backend | ngl | test | t/s fp16 + !int_dot (master) | t/s fp32 + !int_dot | t/s fp32 + int_dot | t/s fp16 + int_dot (PR) | t/s coopmat | t/s coopmat2 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 1015.91 ± 2.50 | 1133.88 ± 1.63 | 1476.91 ± 4.08 | 1442.90 ± 5.24 | 3123.19 ± 18.21 | 4183.73 ± 65.14 |

0cc4m marked this pull request as ready for review on March 29, 2025, 17:14
0cc4m (Collaborator, Author) commented Mar 29, 2025

MUL_MAT_ID is not straightforward, so I'm leaving it out as well. This should be ready now.

0cc4m (Collaborator, Author) commented Mar 30, 2025

RTX 3090:

| model | size | params | backend | ngl | test | t/s fp16 | t/s int dot | t/s coopmat1 | t/s coopmat2 | t/s CUDA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 1020.37 ± 2.42 | 2895.80 ± 21.75 | 3153.55 ± 36.62 | 4259.42 ± 19.80 | 5060.73 ± 11.11 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 995.09 ± 1.62 | 2855.98 ± 15.33 | 3145.50 ± 7.82 | 4259.25 ± 35.24 | 5072.68 ± 35.88 |

AMD Radeon RX 6800 XT:

| model | size | params | backend | ngl | test | t/s Master (fp16) | t/s int dot | t/s ROCm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 921.40 ± 1.32 | 1845.63 ± 8.48 | 1679.75 ± 1.57 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 900.20 ± 1.67 | 1467.22 ± 1.23 | 1620.08 ± 1.30 |

AMD Radeon Pro VII:

| model | size | params | backend | ngl | test | t/s Master (fp16) | t/s int dot | t/s ROCm |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 310.47 ± 2.05 | 800.87 ± 1.91 | 1015.40 ± 0.50 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 310.49 ± 0.54 | 734.82 ± 1.58 | 398.21 ± 0.16 |

Intel A770:

| model | size | params | backend | ngl | test | t/s Master (fp16) | t/s int dot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 165.43 ± 0.10 | 936.79 ± 2.47 |
| llama 8B Q8_0 | 7.95 GiB | 8.03 B | Vulkan | 99 | pp512 | 157.75 ± 0.22 | 834.89 ± 0.86 |

h9j6k commented Mar 31, 2025

Hello, I am using a discrete Intel Xe DG1 (80 EU) card.

When I run vulkaninfo | grep -i coop, it reports that support is available:

VkPhysicalDeviceCooperativeMatrixPropertiesKHR:
        cooperativeMatrixSupportedStages: count = 1
        VK_KHR_cooperative_matrix                     : extension revision 2
VkPhysicalDeviceCooperativeMatrixFeaturesKHR:
        cooperativeMatrix                   = true
        cooperativeMatrixRobustBufferAccess = false

but in this repo, the ggml-vulkan.cpp code says the Intel driver does not work properly with coopmat.

So does that mean, even if this PR is merged, I won't benefit from it? Thanks.

0cc4m (Collaborator, Author) commented Mar 31, 2025

So does that mean, even if this PR is merged, I won't benefit from it? Thanks.

This PR does not require coopmat, no. I'm not familiar with that GPU, but as long as it supports accelerated integer dot product (DP4A), which as far as I know all Intel Xe GPUs do, it will benefit from this PR, yes.

h9j6k commented Mar 31, 2025

I'm not familiar with that GPU, as long as it supports accelerated integer dot product (DP4A), which as far as I know all Intel Xe GPUs do, it will benefit from this PR

Thanks. I just double-checked VkPhysicalDeviceVulkan13Properties on my card, and some of its 8-bit-related accelerations seem to be available. Are those enough for this PR?

VkPhysicalDeviceVulkan13Properties:

minSubgroupSize                                                               = 8
maxSubgroupSize                                                               = 32
maxComputeWorkgroupSubgroups                                                  = 64
requiredSubgroupSizeStages: count = 3
	SHADER_STAGE_COMPUTE_BIT
	SHADER_STAGE_TASK_BIT_EXT
	SHADER_STAGE_MESH_BIT_EXT
maxInlineUniformBlockSize                                                     = 4096
maxPerStageDescriptorInlineUniformBlocks                                      = 32
maxPerStageDescriptorUpdateAfterBindInlineUniformBlocks                       = 32
maxDescriptorSetInlineUniformBlocks                                           = 32
maxDescriptorSetUpdateAfterBindInlineUniformBlocks                            = 32
maxInlineUniformTotalSize                                                     = 65535
integerDotProduct8BitUnsignedAccelerated                                      = false
integerDotProduct8BitSignedAccelerated                                        = false
integerDotProduct8BitMixedSignednessAccelerated                               = false
integerDotProduct4x8BitPackedUnsignedAccelerated                              = true
integerDotProduct4x8BitPackedSignedAccelerated                                = true
integerDotProduct4x8BitPackedMixedSignednessAccelerated                       = true
integerDotProduct16BitUnsignedAccelerated                                     = false
integerDotProduct16BitSignedAccelerated                                       = false
integerDotProduct16BitMixedSignednessAccelerated                              = false
integerDotProduct32BitUnsignedAccelerated                                     = false
integerDotProduct32BitSignedAccelerated                                       = false
integerDotProduct32BitMixedSignednessAccelerated                              = false
integerDotProduct64BitUnsignedAccelerated                                     = false
integerDotProduct64BitSignedAccelerated                                       = false
integerDotProduct64BitMixedSignednessAccelerated                              = false
integerDotProductAccumulatingSaturating8BitUnsignedAccelerated                = false
integerDotProductAccumulatingSaturating8BitSignedAccelerated                  = false
integerDotProductAccumulatingSaturating8BitMixedSignednessAccelerated         = false
integerDotProductAccumulatingSaturating4x8BitPackedUnsignedAccelerated        = true
integerDotProductAccumulatingSaturating4x8BitPackedSignedAccelerated          = true
integerDotProductAccumulatingSaturating4x8BitPackedMixedSignednessAccelerated = true
integerDotProductAccumulatingSaturating16BitUnsignedAccelerated               = false
integerDotProductAccumulatingSaturating16BitSignedAccelerated                 = false
integerDotProductAccumulatingSaturating16BitMixedSignednessAccelerated        = false
integerDotProductAccumulatingSaturating32BitUnsignedAccelerated               = false
integerDotProductAccumulatingSaturating32BitSignedAccelerated                 = false
integerDotProductAccumulatingSaturating32BitMixedSignednessAccelerated        = false
integerDotProductAccumulatingSaturating64BitUnsignedAccelerated               = false
integerDotProductAccumulatingSaturating64BitSignedAccelerated                 = false
integerDotProductAccumulatingSaturating64BitMixedSignednessAccelerated        = false
storageTexelBufferOffsetAlignmentBytes                                        = 0x00000010
storageTexelBufferOffsetSingleTexelAlignment                                  = true
uniformTexelBufferOffsetAlignmentBytes                                        = 0x00000001
uniformTexelBufferOffsetSingleTexelAlignment                                  = true
maxBufferSize                                                                 = 0x100000000

0cc4m (Collaborator, Author) commented Mar 31, 2025

Yeah, that's enough.

0cc4m (Collaborator, Author) commented Mar 31, 2025

Steam Deck APU:

| model | size | params | backend | ngl | test | t/s Master (fp16) | t/s int dot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| llama 8B Q4_0 | 4.33 GiB | 8.03 B | Vulkan | 99 | pp512 | 74.77 ± 1.02 | 161.48 ± 0.44 |

github-actions bot added the devops label (improvements to build systems and github actions) on Mar 31, 2025
0cc4m (Collaborator, Author) commented Mar 31, 2025

I bumped the Windows Vulkan SDK version for the GitHub build so that it supports compiling the GLSL integer dot extension. The Ubuntu build already had it.

0cc4m merged commit a8a1f33 into master on Mar 31, 2025 (51 of 52 checks passed) and deleted the 0cc4m/vulkan-mmq-dp4a branch on March 31, 2025, 12:37.
0cc4m (Collaborator, Author) commented Mar 31, 2025

@jeffbolznv How far did you get with DP4A mmv? Judging from these results it might be very good for specific AMD generations and for all Intel GPUs.

jeffbolznv (Collaborator) commented:

I ported q4_k to use DP4A; the changes are at https://github.com/jeffbolznv/llama.cpp/tree/q4_k_int8 if you want to try it. I didn't see a meaningful improvement on RTX 3070 or 4070; it's maybe fast enough to pay for the overhead of the quantization, but not enough to really help.

easyfab commented Mar 31, 2025

Thank you for this commit. DP4A MMQ gives a great speedup for pp with Q4_0 on the Intel A770. Can it be extended to all quant types in the future?

And @jeffbolznv's DP4A q4_k patch gives a nice boost to tg on the Intel A770.

| config | model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Master | llama 8B Q4_K - Medium | 4.58 GiB | 8.03 B | Vulkan | 99 | tg128 | 19.08 ± 0.07 |
| DP4A q4_k | llama 8B Q4_K - Medium | 4.58 GiB | 8.03 B | Vulkan | 99 | tg128 | 23.50 ± 0.11 |
| Master | llama 13B Q4_K - Small | 12.61 GiB | 23.57 B | Vulkan | 99 | tg128 | 8.26 ± 0.01 |
| DP4A q4_k | llama 13B Q4_K - Small | 12.61 GiB | 23.57 B | Vulkan | 99 | tg128 | 11.91 ± 0.03 |

0cc4m (Collaborator, Author) commented Apr 1, 2025

Thank you for this commit. DP4A MMQ gives a great speedup for pp with Q4_0 on the Intel A770. Can it be extended to all quant types in the future?

Yeah, it can be; it's just going to take a little while to implement all of the repacking functions.

I'll look into k-quants and DP4A MMV soon.

0cc4m (Collaborator, Author) commented Apr 1, 2025

There seems to be an issue with NaNs here that I missed, which leads to incoherence. I'll look into it.
