[RISCV] Reduce the LMUL for a vrgather operation if legal #125768


Merged: 9 commits merged into llvm:main from pr-riscv-vla-shuffle-prefix-only on Feb 6, 2025

Conversation

@preames (Collaborator) commented Feb 4, 2025

If we're lowering a shuffle to a vrgather (or vcompress), and we know that a prefix of the operation can be done while producing the same (defined) lanes, do the operation with a narrower LMUL.

; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vrgather.vv v12, v9, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v10, v12, v0
Collaborator Author (preames) commented:

This exposes a problem with vselect lowering and folding into a source whose tail is undefined, but I don't think this should be blocking for this patch.

@preames (Collaborator, Author) commented Feb 4, 2025

Small observation here: this is placed after the recognition of splats (i.e. vrgather.vi/x). This does cause us to miss a few splat cases that could be done at a lower LMUL, but I'm not sure how much we care, since the splats are generally linear in LMUL anyway.

@llvmbot (Member) commented Feb 4, 2025

@llvm/pr-subscribers-backend-risc-v

Author: Philip Reames (preames)

Changes

If we're lowering a shuffle to a vrgather (or vcompress), and we know that a prefix of the operation can be done while producing the same (defined) lanes, do the operation with a narrower LMUL.


Patch is 23.18 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/125768.diff

2 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+25)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll (+184-156)
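
As an illustration of the check described above, here is a minimal standalone C++ sketch of just the narrowing computation the new RISCVISelLowering.cpp hunk below performs, lifted out of SelectionDAG so it can be compiled and run on its own. The function name, the std::vector mask representation, and the hard-coded MinVLMAX value are assumptions made for this sketch; the in-tree code works on the shuffle's Mask ArrayRef and, when narrowing applies, rewrites the shuffle through EXTRACT_SUBVECTOR/INSERT_SUBVECTOR rather than returning an element count.

// Standalone sketch of the prefix-narrowing computation (illustrative only).
// An index of -1 marks an undef lane in the shuffle mask.
#include <algorithm>
#include <cstdio>
#include <vector>

// Returns the element count the shuffle could be shrunk to, or the original
// count if no narrowing applies. MinVLMAX is the minimum VLEN divided by the
// element width, i.e. how many elements fit in a single (LMUL=1) register.
static unsigned narrowedEltCount(const std::vector<int> &Mask, unsigned MinVLMAX) {
  unsigned NumElts = (unsigned)Mask.size();
  if (NumElts <= MinVLMAX)
    return NumElts;
  // Largest lane index that is either produced (I) or consumed (M) by a
  // defined lane of the shuffle.
  unsigned MaxIdx = 0;
  for (unsigned I = 0; I < NumElts; ++I) {
    int M = Mask[I];
    if (M == -1)
      continue;
    MaxIdx = std::max({MaxIdx, I, (unsigned)M});
  }
  // Halve the vector type while every defined lane still lives in the low
  // half, but never drop below one register's worth of elements.
  unsigned NewNumElts = NumElts;
  while (MaxIdx < NewNumElts / 2 && NewNumElts != MinVLMAX)
    NewNumElts /= 2;
  return NewNumElts;
}

int main() {
  // A 32-element mask whose defined lanes only reference indices 0..15,
  // roughly the shape the reverse_v32i8_2 test below hands to this logic.
  std::vector<int> Mask(32, -1);
  for (int I = 0; I < 16; ++I)
    Mask[I] = 15 - I;
  // SEW=8 on a VLEN>=128 target gives MinVLMAX = 128 / 8 = 16.
  printf("narrowed to %u elements\n", narrowedEltCount(Mask, 16)); // prints 16
  return 0;
}

With those inputs the shuffle shrinks from 32 to 16 elements, so the vrgather can run at m1 instead of m2, which matches the "vsetivli zero, 16, e8, m1" / "vrgather.vv" sequence visible in the reverse_v32i8_2 test diff.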
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 7c3b58389da28ee..a6cca57943de1f0 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5619,6 +5619,31 @@ static SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG,
       }
     }
 
+    // If only a prefix of the source elements influence a prefix of the
+    // destination elements, try to see if we can reduce the required LMUL
+    unsigned MinVLen = Subtarget.getRealMinVLen();
+    unsigned MinVLMAX = MinVLen / VT.getScalarSizeInBits();
+    if (NumElts > MinVLMAX) {
+      unsigned MaxIdx = 0;
+      for (auto [I, M] : enumerate(Mask)) {
+        if (M == -1)
+          continue;
+        MaxIdx = std::max(std::max((unsigned)I,(unsigned)M), MaxIdx);
+      }
+      unsigned NewNumElts = NumElts;
+      while (MaxIdx < NewNumElts / 2 && NewNumElts != MinVLMAX)
+        NewNumElts /= 2;
+      if (NewNumElts != NumElts) {
+        MVT NewVT = MVT::getVectorVT(VT.getVectorElementType(), NewNumElts);
+        SDValue ZeroIdx = DAG.getVectorIdxConstant(0, DL);
+        V1 = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, NewVT, V1, ZeroIdx);
+        SDValue Res = DAG.getVectorShuffle(NewVT, DL, V1, DAG.getUNDEF(NewVT),
+                                           Mask.take_front(NewNumElts));
+        return DAG.getNode(ISD::INSERT_SUBVECTOR, DL, VT, DAG.getUNDEF(VT),
+                           Res, ZeroIdx);
+      }
+    }
+
     // Before hitting generic lowering fallbacks, try to widen the mask
     // to a wider SEW.
     if (SDValue V = tryWidenMaskForShuffle(Op, DAG))
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll
index 5fd7e47507f71e4..69c101c79afbab6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-reverse.ll
@@ -874,27 +874,30 @@ define <16 x i8> @reverse_v16i8_2(<8 x i8> %a, <8 x i8> %b) {
 define <32 x i8> @reverse_v32i8_2(<16 x i8> %a, <16 x i8> %b) {
 ; CHECK-LABEL: reverse_v32i8_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v9
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v12
+; CHECK-NEXT:    vsetvli a1, zero, e16, m2, ta, ma
+; CHECK-NEXT:    vid.v v10
 ; CHECK-NEXT:    addi a1, a0, -1
-; CHECK-NEXT:    vrsub.vx v12, v12, a1
+; CHECK-NEXT:    vrsub.vx v10, v10, a1
 ; CHECK-NEXT:    lui a1, 16
 ; CHECK-NEXT:    addi a1, a1, -1
 ; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v15, v8, v12
-; CHECK-NEXT:    vrgatherei16.vv v14, v9, v12
+; CHECK-NEXT:    vrgatherei16.vv v15, v8, v10
+; CHECK-NEXT:    vrgatherei16.vv v14, v12, v10
 ; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
 ; CHECK-NEXT:    li a1, 32
-; CHECK-NEXT:    slli a0, a0, 1
-; CHECK-NEXT:    vsetvli zero, a1, e8, m2, ta, mu
+; CHECK-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
 ; CHECK-NEXT:    vid.v v8
+; CHECK-NEXT:    slli a0, a0, 1
+; CHECK-NEXT:    vrsub.vi v8, v8, 15
 ; CHECK-NEXT:    addi a0, a0, -32
-; CHECK-NEXT:    vrsub.vi v12, v8, 15
-; CHECK-NEXT:    vslidedown.vx v8, v14, a0
-; CHECK-NEXT:    vrgather.vv v8, v10, v12, v0.t
+; CHECK-NEXT:    vsetvli zero, a1, e8, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v14, a0
+; CHECK-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
+; CHECK-NEXT:    vrgather.vv v12, v9, v8
+; CHECK-NEXT:    vsetvli zero, a1, e8, m2, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v10, v12, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <16 x i8> %a, <16 x i8> %b,  <32 x i32> <i32 31, i32 30, i32 29, i32 28, i32 27, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 17, i32 16, i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <32 x i8> %res
@@ -943,23 +946,26 @@ define <8 x i16> @reverse_v8i16_2(<4 x i16> %a, <4 x i16> %b) {
 define <16 x i16> @reverse_v16i16_2(<8 x i16> %a, <8 x i16> %b) {
 ; CHECK-LABEL: reverse_v16i16_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e16, m1, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v9
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v9
+; CHECK-NEXT:    vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT:    vid.v v10
 ; CHECK-NEXT:    srli a1, a0, 1
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v9, v9, a1
-; CHECK-NEXT:    vrgather.vv v13, v8, v9
-; CHECK-NEXT:    vrgather.vv v12, v11, v9
-; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, mu
+; CHECK-NEXT:    vrsub.vx v10, v10, a1
+; CHECK-NEXT:    vrgather.vv v13, v8, v10
+; CHECK-NEXT:    vrgather.vv v12, v11, v10
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v8
 ; CHECK-NEXT:    li a1, 255
 ; CHECK-NEXT:    addi a0, a0, -16
-; CHECK-NEXT:    vrsub.vi v14, v8, 7
+; CHECK-NEXT:    vrsub.vi v8, v8, 7
 ; CHECK-NEXT:    vmv.s.x v0, a1
-; CHECK-NEXT:    vslidedown.vx v8, v12, a0
-; CHECK-NEXT:    vrgather.vv v8, v10, v14, v0.t
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vrgather.vv v12, v9, v8
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v10, v12, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <8 x i16> %a, <8 x i16> %b,  <16 x i32> <i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <16 x i16> %res
@@ -968,30 +974,33 @@ define <16 x i16> @reverse_v16i16_2(<8 x i16> %a, <8 x i16> %b) {
 define <32 x i16> @reverse_v32i16_2(<16 x i16> %a, <16 x i16> %b) {
 ; CHECK-LABEL: reverse_v32i16_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e16, m1, ta, ma
-; CHECK-NEXT:    vmv2r.v v12, v10
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v10
-; CHECK-NEXT:    lui a1, 16
-; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
-; CHECK-NEXT:    vmv.s.x v0, a1
+; CHECK-NEXT:    vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT:    vid.v v12
 ; CHECK-NEXT:    srli a1, a0, 1
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vsetvli zero, zero, e16, m1, ta, ma
-; CHECK-NEXT:    vrsub.vx v10, v10, a1
+; CHECK-NEXT:    vrsub.vx v13, v12, a1
+; CHECK-NEXT:    vrgather.vv v15, v8, v13
+; CHECK-NEXT:    vrgather.vv v14, v9, v13
+; CHECK-NEXT:    vrgather.vv v9, v10, v13
+; CHECK-NEXT:    vrgather.vv v8, v11, v13
 ; CHECK-NEXT:    li a1, 32
-; CHECK-NEXT:    slli a0, a0, 1
-; CHECK-NEXT:    vrgather.vv v19, v8, v10
-; CHECK-NEXT:    vrgather.vv v18, v9, v10
-; CHECK-NEXT:    vrgather.vv v16, v11, v10
-; CHECK-NEXT:    vsetvli zero, a1, e16, m4, ta, mu
-; CHECK-NEXT:    vid.v v8
-; CHECK-NEXT:    addi a0, a0, -32
-; CHECK-NEXT:    vrsub.vi v20, v8, 15
-; CHECK-NEXT:    vmv1r.v v17, v16
-; CHECK-NEXT:    vslidedown.vx v8, v16, a0
-; CHECK-NEXT:    vrgather.vv v8, v12, v20, v0.t
+; CHECK-NEXT:    lui a2, 16
+; CHECK-NEXT:    addi a2, a2, -1
+; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vmv.s.x v0, a2
+; CHECK-NEXT:    slli a2, a0, 1
+; CHECK-NEXT:    addi a0, a0, -16
+; CHECK-NEXT:    vsetvli zero, zero, e16, m1, ta, ma
+; CHECK-NEXT:    vrgather.vv v12, v10, v13
+; CHECK-NEXT:    addi a2, a2, -32
+; CHECK-NEXT:    vmv.v.v v13, v12
+; CHECK-NEXT:    vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a2
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v8, v8, a0
+; CHECK-NEXT:    vsetvli zero, a1, e16, m4, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v12, v8, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <16 x i16> %a, <16 x i16> %b,  <32 x i32> <i32 31, i32 30, i32 29, i32 28, i32 27, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 17, i32 16, i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <32 x i16> %res
@@ -1024,24 +1033,26 @@ define <4 x i32> @reverse_v4i32_2(<2 x i32> %a, < 2 x i32> %b) {
 define <8 x i32> @reverse_v8i32_2(<4 x i32> %a, <4 x i32> %b) {
 ; CHECK-LABEL: reverse_v8i32_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v9
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v9
+; CHECK-NEXT:    vsetvli a1, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vid.v v10
 ; CHECK-NEXT:    srli a1, a0, 2
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v9, v9, a1
-; CHECK-NEXT:    vrgather.vv v13, v8, v9
-; CHECK-NEXT:    vrgather.vv v12, v11, v9
-; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vrsub.vx v10, v10, a1
+; CHECK-NEXT:    vrgather.vv v13, v8, v10
+; CHECK-NEXT:    vrgather.vv v12, v11, v10
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
 ; CHECK-NEXT:    vid.v v8
 ; CHECK-NEXT:    vmv.v.i v0, 15
 ; CHECK-NEXT:    srli a0, a0, 1
-; CHECK-NEXT:    vrsub.vi v14, v8, 3
+; CHECK-NEXT:    vrsub.vi v8, v8, 3
 ; CHECK-NEXT:    addi a0, a0, -8
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, mu
-; CHECK-NEXT:    vslidedown.vx v8, v12, a0
-; CHECK-NEXT:    vrgatherei16.vv v8, v10, v14, v0.t
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT:    vrgather.vv v12, v9, v8
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v10, v12, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <4 x i32> %a, <4 x i32> %b, <8 x i32> <i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <8 x i32> %res
@@ -1050,26 +1061,29 @@ define <8 x i32> @reverse_v8i32_2(<4 x i32> %a, <4 x i32> %b) {
 define <16 x i32> @reverse_v16i32_2(<8 x i32> %a, <8 x i32> %b) {
 ; CHECK-LABEL: reverse_v16i32_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vmv2r.v v12, v10
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    vsetvli a1, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vid.v v12
 ; CHECK-NEXT:    srli a1, a0, 2
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v14, v10, a1
-; CHECK-NEXT:    vrgather.vv v11, v8, v14
-; CHECK-NEXT:    vrgather.vv v10, v9, v14
-; CHECK-NEXT:    vrgather.vv v8, v9, v14
-; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
-; CHECK-NEXT:    vid.v v14
+; CHECK-NEXT:    vrsub.vx v13, v12, a1
+; CHECK-NEXT:    vrgather.vv v15, v8, v13
+; CHECK-NEXT:    vrgather.vv v14, v9, v13
+; CHECK-NEXT:    vrgather.vv v9, v10, v13
+; CHECK-NEXT:    vrgather.vv v8, v11, v13
 ; CHECK-NEXT:    li a1, 255
-; CHECK-NEXT:    addi a0, a0, -16
-; CHECK-NEXT:    vrsub.vi v16, v14, 7
 ; CHECK-NEXT:    vmv.s.x v0, a1
-; CHECK-NEXT:    vmv1r.v v9, v8
-; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, mu
+; CHECK-NEXT:    addi a1, a0, -16
+; CHECK-NEXT:    srli a0, a0, 1
+; CHECK-NEXT:    vrgather.vv v12, v10, v13
+; CHECK-NEXT:    addi a0, a0, -8
+; CHECK-NEXT:    vmv.v.v v13, v12
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
 ; CHECK-NEXT:    vslidedown.vx v8, v8, a0
-; CHECK-NEXT:    vrgatherei16.vv v8, v12, v16, v0.t
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v12, v8, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <8 x i32> %a, <8 x i32> %b,  <16 x i32> <i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <16 x i32> %res
@@ -1078,32 +1092,36 @@ define <16 x i32> @reverse_v16i32_2(<8 x i32> %a, <8 x i32> %b) {
 define <32 x i32> @reverse_v32i32_2(<16 x i32> %a, <16 x i32> %b) {
 ; CHECK-LABEL: reverse_v32i32_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vmv4r.v v16, v12
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v12
+; CHECK-NEXT:    vsetvli a1, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vid.v v16
 ; CHECK-NEXT:    srli a1, a0, 2
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v20, v12, a1
-; CHECK-NEXT:    vrgather.vv v15, v8, v20
-; CHECK-NEXT:    vrgather.vv v14, v9, v20
-; CHECK-NEXT:    vrgather.vv v13, v10, v20
-; CHECK-NEXT:    vrgather.vv v12, v11, v20
-; CHECK-NEXT:    lui a1, 16
-; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vmv.s.x v0, a1
+; CHECK-NEXT:    vrsub.vx v17, v16, a1
+; CHECK-NEXT:    vrgather.vv v23, v8, v17
+; CHECK-NEXT:    vrgather.vv v22, v9, v17
+; CHECK-NEXT:    vrgather.vv v21, v10, v17
+; CHECK-NEXT:    vrgather.vv v20, v11, v17
+; CHECK-NEXT:    vrgather.vv v11, v12, v17
+; CHECK-NEXT:    vrgather.vv v10, v13, v17
+; CHECK-NEXT:    vrgather.vv v9, v14, v17
+; CHECK-NEXT:    vrgather.vv v8, v15, v17
 ; CHECK-NEXT:    li a1, 32
-; CHECK-NEXT:    slli a0, a0, 1
-; CHECK-NEXT:    vrgather.vv v8, v9, v20
-; CHECK-NEXT:    vsetvli zero, a1, e16, m4, ta, ma
-; CHECK-NEXT:    vid.v v20
-; CHECK-NEXT:    addi a0, a0, -32
-; CHECK-NEXT:    vmv1r.v v9, v8
-; CHECK-NEXT:    vrsub.vi v24, v20, 15
-; CHECK-NEXT:    vmv2r.v v10, v8
-; CHECK-NEXT:    vsetvli zero, zero, e32, m8, ta, mu
+; CHECK-NEXT:    lui a2, 16
+; CHECK-NEXT:    addi a2, a2, -1
+; CHECK-NEXT:    vmv.s.x v0, a2
+; CHECK-NEXT:    slli a2, a0, 1
+; CHECK-NEXT:    vrgather.vv v16, v12, v17
+; CHECK-NEXT:    addi a0, a0, -16
+; CHECK-NEXT:    vmv.v.v v17, v16
+; CHECK-NEXT:    addi a2, a2, -32
+; CHECK-NEXT:    vmv2r.v v18, v16
+; CHECK-NEXT:    vsetvli zero, a1, e32, m8, ta, ma
+; CHECK-NEXT:    vslidedown.vx v16, v16, a2
+; CHECK-NEXT:    vsetivli zero, 16, e32, m4, ta, ma
 ; CHECK-NEXT:    vslidedown.vx v8, v8, a0
-; CHECK-NEXT:    vrgatherei16.vv v8, v16, v24, v0.t
+; CHECK-NEXT:    vsetvli zero, a1, e32, m8, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v16, v8, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <16 x i32> %a, <16 x i32> %b,  <32 x i32> <i32 31, i32 30, i32 29, i32 28, i32 27, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 17, i32 16, i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <32 x i32> %res
@@ -1127,28 +1145,29 @@ define <4 x i64> @reverse_v4i64_2(<2 x i64> %a, < 2 x i64> %b) {
 define <8 x i64> @reverse_v8i64_2(<4 x i64> %a, <4 x i64> %b) {
 ; CHECK-LABEL: reverse_v8i64_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
-; CHECK-NEXT:    vmv2r.v v12, v10
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vid.v v12
 ; CHECK-NEXT:    srli a1, a0, 3
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v14, v10, a1
-; CHECK-NEXT:    vrgather.vv v11, v8, v14
-; CHECK-NEXT:    vrgather.vv v10, v9, v14
-; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vid.v v15
-; CHECK-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
-; CHECK-NEXT:    vrgather.vv v8, v9, v14
+; CHECK-NEXT:    vrsub.vx v13, v12, a1
+; CHECK-NEXT:    vrgather.vv v15, v8, v13
+; CHECK-NEXT:    vrgather.vv v14, v9, v13
+; CHECK-NEXT:    vrgather.vv v9, v10, v13
+; CHECK-NEXT:    vrgather.vv v8, v11, v13
 ; CHECK-NEXT:    vmv.v.i v0, 15
-; CHECK-NEXT:    srli a0, a0, 1
-; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
-; CHECK-NEXT:    vrsub.vi v16, v15, 3
-; CHECK-NEXT:    addi a0, a0, -8
-; CHECK-NEXT:    vmv1r.v v9, v8
-; CHECK-NEXT:    vsetvli zero, zero, e64, m4, ta, mu
+; CHECK-NEXT:    srli a1, a0, 1
+; CHECK-NEXT:    srli a0, a0, 2
+; CHECK-NEXT:    addi a1, a1, -8
+; CHECK-NEXT:    vrgather.vv v12, v10, v13
+; CHECK-NEXT:    addi a0, a0, -4
+; CHECK-NEXT:    vmv.v.v v13, v12
+; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
 ; CHECK-NEXT:    vslidedown.vx v8, v8, a0
-; CHECK-NEXT:    vrgatherei16.vv v8, v12, v16, v0.t
+; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v12, v8, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <4 x i64> %a, <4 x i64> %b, <8 x i32> <i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <8 x i64> %res
@@ -1197,23 +1216,26 @@ define <8 x half> @reverse_v8f16_2(<4 x half> %a, <4 x half> %b) {
 define <16 x half> @reverse_v16f16_2(<8 x half> %a, <8 x half> %b) {
 ; CHECK-LABEL: reverse_v16f16_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e16, m1, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v9
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v9
+; CHECK-NEXT:    vsetvli a1, zero, e16, m1, ta, ma
+; CHECK-NEXT:    vid.v v10
 ; CHECK-NEXT:    srli a1, a0, 1
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v9, v9, a1
-; CHECK-NEXT:    vrgather.vv v13, v8, v9
-; CHECK-NEXT:    vrgather.vv v12, v11, v9
-; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, mu
+; CHECK-NEXT:    vrsub.vx v10, v10, a1
+; CHECK-NEXT:    vrgather.vv v13, v8, v10
+; CHECK-NEXT:    vrgather.vv v12, v11, v10
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v8
 ; CHECK-NEXT:    li a1, 255
 ; CHECK-NEXT:    addi a0, a0, -16
-; CHECK-NEXT:    vrsub.vi v14, v8, 7
+; CHECK-NEXT:    vrsub.vi v8, v8, 7
 ; CHECK-NEXT:    vmv.s.x v0, a1
-; CHECK-NEXT:    vslidedown.vx v8, v12, a0
-; CHECK-NEXT:    vrgather.vv v8, v10, v14, v0.t
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vrgather.vv v12, v9, v8
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v10, v12, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <8 x half> %a, <8 x half> %b,  <16 x i32> <i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <16 x half> %res
@@ -1269,24 +1291,26 @@ define <4 x float> @reverse_v4f32_2(<2 x float> %a, <2 x float> %b) {
 define <8 x float> @reverse_v8f32_2(<4 x float> %a, <4 x float> %b) {
 ; CHECK-LABEL: reverse_v8f32_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vmv1r.v v10, v9
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v9
+; CHECK-NEXT:    vsetvli a1, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vid.v v10
 ; CHECK-NEXT:    srli a1, a0, 2
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v9, v9, a1
-; CHECK-NEXT:    vrgather.vv v13, v8, v9
-; CHECK-NEXT:    vrgather.vv v12, v11, v9
-; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vrsub.vx v10, v10, a1
+; CHECK-NEXT:    vrgather.vv v13, v8, v10
+; CHECK-NEXT:    vrgather.vv v12, v11, v10
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
 ; CHECK-NEXT:    vid.v v8
 ; CHECK-NEXT:    vmv.v.i v0, 15
 ; CHECK-NEXT:    srli a0, a0, 1
-; CHECK-NEXT:    vrsub.vi v14, v8, 3
+; CHECK-NEXT:    vrsub.vi v8, v8, 3
 ; CHECK-NEXT:    addi a0, a0, -8
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, mu
-; CHECK-NEXT:    vslidedown.vx v8, v12, a0
-; CHECK-NEXT:    vrgatherei16.vv v8, v10, v14, v0.t
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
+; CHECK-NEXT:    vrgather.vv v12, v9, v8
+; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
+; CHECK-NEXT:    vmerge.vvm v8, v10, v12, v0
 ; CHECK-NEXT:    ret
   %res = shufflevector <4 x float> %a, <4 x float> %b, <8 x i32> <i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
   ret <8 x float> %res
@@ -1295,26 +1319,29 @@ define <8 x float> @reverse_v8f32_2(<4 x float> %a, <4 x float> %b) {
 define <16 x float> @reverse_v16f32_2(<8 x float> %a, <8 x float> %b) {
 ; CHECK-LABEL: reverse_v16f32_2:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
-; CHECK-NEXT:    vmv2r.v v12, v10
 ; CHECK-NEXT:    csrr a0, vlenb
-; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    vsetvli a1, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vid.v v12
 ; CHECK-NEXT:    srli a1, a0, 2
 ; CHECK-NEXT:    addi a1, a1, -1
-; CHECK-NEXT:    vrsub.vx v14, v10, a1
-; CHECK-NEXT:    vrgather.vv v11, v8, v14
-; CHECK-NEXT:    vrgather.vv v10, v9, v14
-; CHECK-NEXT:    vrgather.vv v8, v9, v14
-; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
-; CHEC...
[truncated]


github-actions bot commented Feb 4, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@preames (Collaborator, Author) commented Feb 4, 2025

Missed a test update, update pending.

; CHECK-NEXT: vrgatherei16.vv v15, v8, v12
; CHECK-NEXT: vrgatherei16.vv v14, v9, v12
; CHECK-NEXT: vrgatherei16.vv v15, v8, v10
; CHECK-NEXT: vrgatherei16.vv v14, v12, v10
Collaborator commented:

Are you aware that v12 here is undefined?

Collaborator Author (preames) commented:

I wasn't, and had a moment of panic as this was also in the other change I landed last night, but this is correct.

We're lowering a shuffle <undef, undef, undef, undef, 3, 2, 1, 0> here. This is recognized as a reverse, but the high lanes of the source are undefined and undemanded. This has been true the whole time; the register allocation perturbation from this patch just happens to result in a different register being picked.

We could arguably do a better lowering here - e.g. reverse m1 then slide - but doing that will only save a linear number of m1 vrgathers at best.

Collaborator commented:

Looks like #125949 also fixes this?

@preames (Collaborator, Author) commented Feb 6, 2025:

More accidentally than anything else. If you do a <undef, undef, undef, undef, 3, 2, 1, 0> single-source shuffle (as opposed to the two-argument reverse being split), we'll still see an undefined source register.

I added a test in ed8a6d which hits this case, but because of register allocation it's not really obvious.

; CHECK-NEXT: vsetivli zero, 16, e8, m1, ta, ma
; CHECK-NEXT: vrgather.vv v12, v9, v8
; CHECK-NEXT: vsetvli zero, a1, e8, m2, ta, ma
; CHECK-NEXT: vmerge.vvm v8, v10, v12, v0
; CHECK-NEXT: ret
%res = shufflevector <16 x i8> %a, <16 x i8> %b, <32 x i32> <i32 31, i32 30, i32 29, i32 28, i32 27, i32 26, i32 25, i32 24, i32 23, i32 22, i32 21, i32 20, i32 19, i32 18, i32 17, i32 16, i32 15, i32 14, i32 13, i32 12, i32 11, i32 10, i32 9, i32 8, i32 7, i32 6, i32 5, i32 4, i32 3, i32 2, i32 1, i32 0>
Contributor commented:

Why aren't we able to do the single source version of the reverse lowering here, which is just LMUL * M1 vrgathers and a single vslidedown? E.g. see earlier in the tests:

reverse_v8i32:                     
# %bb.0:                           
  csrr a0, vlenb                   
  vsetvli a1, zero, e32, m1, ta, ma
  vid.v v10                        
  srli a1, a0, 2                   
  srli a0, a0, 1                   
  addi a1, a1, -1                  
  vrsub.vx v10, v10, a1            
  vrgather.vv v13, v8, v10         
  vrgather.vv v12, v9, v10         
  addi a0, a0, -8                  
  vsetivli zero, 8, e32, m2, ta, ma
  vslidedown.vx v8, v12, a0        
  ret

It looks like the reverse mask is getting all messed up when the shufflevector goes from IR to SDAG:

        t9: v32i8 = concat_vectors t4, undef:v16i8
        t10: v32i8 = concat_vectors t7, undef:v16i8
      t11: v32i8 = vector_shuffle<47,46,45,44,43,42,41,40,39,38,37,36,35,34,33,32,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0> t9, t10

Would we get better lowering if we combined this to

        t10: v32i8 = concat_vectors t4, t7
      t11: v32i8 = vector_shuffle<31,30,29,28,27,26,25,24,23,22,21,20,19,18,17,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1,0> t10, undef

Collaborator Author (preames) commented:

Note that in general, the two sources are unrelated register groups, so the concat is also an O(LMUL) operation. Given that, the single-source reverse mapping is in the same rough ballpark as the two-independent-reverses-plus-merge form.

However, you are right there's something interesting here. If I add the reverse case to isShuffleMaskLegal, I get a much improved lowering for these. Specifically, we reverse each source and then concat.

I'm going to post a separate patch for that, good catch.
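
For context, "the reverse case" here is the mask shape in which every defined lane i selects element NumElts - 1 - i. A hypothetical standalone predicate for that shape, with illustrative names and not taken from the actual follow-up change, might look like:

// Hypothetical sketch of a reverse-mask predicate; -1 marks an undef lane.
// Illustrative only; not the in-tree isShuffleMaskLegal change.
#include <vector>

static bool isReverseShuffleMask(const std::vector<int> &Mask) {
  int NumElts = (int)Mask.size();
  for (int I = 0; I < NumElts; ++I)
    if (Mask[I] != -1 && Mask[I] != NumElts - 1 - I)
      return false;
  return true;
}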

Contributor commented:

FWIW, I also tried adding reverses to isShuffleMaskLegal, but noticed that the generic DAG combine that kicks in (foldShuffleOfConcatUndefs) ends up creating a tree of vslidedowns with a final vslideup, rather than just one wide vslidedown.

I didn't have any great ideas on how to fix that, so I stopped there, but agreed it seems worth exploring.

Collaborator Author (preames) commented:

See #125949

@topperc (Collaborator) left a comment:

LGTM

; V-NEXT: vsetivli zero, 1, e64, m1, ta, ma
; V-NEXT: vmv.s.x v16, a0
; V-NEXT: vsetivli zero, 16, e32, m2, ta, ma
; V-NEXT: vnsrl.wi v16, v8, 0
Contributor commented:

This is a nice win, and a good way to work around the previous m8 limitation.

Should we duplicate the test without the undef indices, though, so we're still testing the negative case?

Collaborator Author (preames) commented:

I don't really see any value in duplicating it, given that the undef tail is the canonical form for a single-source deinterleave. If it doesn't have an undef tail, it can't be a deinterleave by definition. In real code, there'd probably be an extract subvector following it, not a full-width store.

@preames preames merged commit f2ac265 into llvm:main Feb 6, 2025
8 checks passed
@preames preames deleted the pr-riscv-vla-shuffle-prefix-only branch February 6, 2025 15:15
Icohedron pushed a commit to Icohedron/llvm-project that referenced this pull request Feb 11, 2025