
[RISCV] Decompose single source shuffles (without exact VLEN) #126108

Merged
3 commits merged into llvm:main on Feb 11, 2025

Conversation

preames (Collaborator) commented Feb 6, 2025

This is a continuation of the work started in #125735 to lower selected VLA shuffles as a linear number of m1 components instead of generating O(LMUL^2) or O(LMUL*Log2(LMUL)) high-LMUL shuffles.

This pattern focuses on shuffles where all the elements being used across the entire destination register group come from a single register in the source register group. Such cases come up fairly frequently via e.g. spread(N) and repeat(N) idioms.

One subtlety to this patch is the handling of the index vector for vrgatherei16.vv. Because the index and source registers can have different EEW, the index vector for the Nth chunk of the destination is not guaranteed to be register aligned. In fact, it is common for e.g. an EEW=64 shuffle to have EEW=16 indices, which packs four chunks' worth of indices into each index register. Given this, we have to pay a cost for extracting these chunks into the low position before performing each shuffle.
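
To make the alignment arithmetic concrete, here is a small standalone sketch of the chunk counting described above; the VLEN value is assumed purely for illustration, and none of this is code from the patch:

```cpp
#include <cstdio>

// Rough model of the index-alignment arithmetic: with EEW=16 indices, one
// m1 register holds VLEN/16 indices, while each m1 chunk of the data holds
// VLEN/SEW elements, so (VLEN/16) / (VLEN/SEW) = SEW/16 destination chunks
// share a single m1 index register. The slide between consecutive chunks is
// one data chunk's worth of index elements, i.e. VLEN/SEW of them.
int main() {
  const int VLEN = 128; // assumed VLEN, purely for illustration
  const int SEWs[] = {16, 32, 64};
  for (int SEW : SEWs) {
    int ElemsPerDataChunk = VLEN / SEW; // data elements per m1 register
    int IdxPerM1Reg = VLEN / 16;        // EEW=16 indices per m1 register
    int ChunksPerIdxReg = IdxPerM1Reg / ElemsPerDataChunk; // == SEW/16
    std::printf("SEW=%d: %d destination chunk(s) per m1 index register, "
                "slide %d index elements between chunks\n",
                SEW, ChunksPerIdxReg, ElemsPerDataChunk);
  }
  return 0;
}
```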

I'd initially expressed this as a naive extract sub-vector for each data-parallel piece. However, at high LMUL, this quickly caused register pressure problems, since we could at worst need 4x the temporary registers for the index. Instead, this patch uses a repeating slidedown chained from previous iterations. This increases the critical path by at worst 3 slides (SEW=64 is the worst case), but reduces register pressure to at worst 2x, and only if the original index vector is reused elsewhere. I view this as arguably a bit of a workaround (since our scheduling should have done better with the plain extract variant), but a probably necessary one.

llvmbot (Member) commented Feb 6, 2025

@llvm/pr-subscribers-backend-risc-v

Author: Philip Reames (preames)

Changes

This is a continuation of the work started in #125735 to lower selected VLA shuffles in linear m1 components instead of generating O(LMUL^2) or O(LMUL*Log2(LMUL) high LMUL shuffles.

This pattern focuses on shuffles where all the elements being used across the entire destination register group come from a single register in the source register group. Such cases come up fairly frequently via e.g. spread(N), and repeat(N) idioms.

One subtlety to this patch is the handling of the index vector for vrgatherei16.vv. Because the index and source registers can have different EEW, the index vector for the Nth chunk of the destination is not guaranteed to be register aligned. In fact, it is common for e.g. an EEW=64 shuffle to have EEW=16 indices which are four chunks per source register. Given this, we have to pay a cost for extracting these chunks into the low position before performing each shuffle.

I'd initially expressed this as a naive extract sub-vector for each data parallel piece. However, at high LMUL, this quickly caused register pressure problems since we could at worst need 4x the temporary registers for the index. Instead, this patch uses a repeating slidedown chained from previous iterations. This increases critical path by at worst 3 slides (SEW=64 is the worst case), but reduces register pressure to at worst 2x - and only if the original index vector is reused elsewhere. I view this as arguably a bit of a workaround (since our scheduling should have done better with the plan extract variant), but a probably neccessary one.


Full diff: https://github.com/llvm/llvm-project/pull/126108.diff

5 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+37)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll (+12-7)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll (+12-7)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll (+45-13)
  • (modified) llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll (+71-34)
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 165c71d8e03f16..30db4f6f13d856 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5350,6 +5350,15 @@ static bool isLocalRepeatingShuffle(ArrayRef<int> Mask, int Span) {
   return true;
 }
 
+/// Is this mask only using elements from the first span of the input?
+static bool isLowSourceShuffle(ArrayRef<int> Mask, int Span) {
+  for (auto [I, M] : enumerate(Mask)) {
+    if (M != -1 && M >= Span)
+      return false;
+  }
+  return true;
+}
+
 /// Try to widen element type to get a new mask value for a better permutation
 /// sequence.  This doesn't try to inspect the widened mask for profitability;
 /// we speculate the widened form is equal or better.  This has the effect of
@@ -5766,6 +5775,34 @@ static SDValue lowerVECTOR_SHUFFLE(SDValue Op, SelectionDAG &DAG,
         Gather = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, ContainerVT, Gather,
                              SubVec, SubIdx);
       }
+    } else if (ContainerVT.bitsGT(M1VT) && isLowSourceShuffle(Mask, VLMAX)) {
+      EVT SubIndexVT = M1VT.changeVectorElementType(IndexVT.getScalarType());
+      auto [InnerTrueMask, InnerVL] =
+          getDefaultScalableVLOps(M1VT, DL, DAG, Subtarget);
+      int N = ContainerVT.getVectorMinNumElements() /
+              M1VT.getVectorMinNumElements();
+      assert(isPowerOf2_32(N) && N <= 8);
+      Gather = DAG.getUNDEF(ContainerVT);
+      SDValue SlideAmt =
+          DAG.getElementCount(DL, XLenVT, M1VT.getVectorElementCount());
+      for (int i = 0; i < N; i++) {
+        if (i != 0)
+          LHSIndices = getVSlidedown(DAG, Subtarget, DL, IndexContainerVT,
+                                     DAG.getUNDEF(IndexContainerVT), LHSIndices,
+                                     SlideAmt, TrueMask, VL);
+        SDValue SubIdx =
+            DAG.getVectorIdxConstant(M1VT.getVectorMinNumElements() * i, DL);
+        SDValue SubV1 =
+            DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, M1VT, V1, SubIdx);
+        SDValue SubIndex =
+            DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, SubIndexVT, LHSIndices,
+                        DAG.getVectorIdxConstant(0, DL));
+        SDValue SubVec =
+            DAG.getNode(GatherVVOpc, DL, M1VT, SubV1, SubIndex,
+                        DAG.getUNDEF(M1VT), InnerTrueMask, InnerVL);
+        Gather = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, ContainerVT, Gather,
+                             SubVec, SubIdx);
+      }
     } else {
       Gather = DAG.getNode(GatherVVOpc, DL, ContainerVT, V1, LHSIndices,
                            DAG.getUNDEF(ContainerVT), TrueMask, VL);
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
index 4b09b571b94069..30b2181ece1eb6 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-fp-interleave.ll
@@ -38,15 +38,20 @@ define <4 x float> @interleave_v2f32(<2 x float> %x, <2 x float> %y) {
 define <4 x double> @interleave_v2f64(<2 x double> %x, <2 x double> %y) {
 ; V128-LABEL: interleave_v2f64:
 ; V128:       # %bb.0:
+; V128-NEXT:    csrr a0, vlenb
 ; V128-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; V128-NEXT:    vmv1r.v v12, v9
-; V128-NEXT:    vid.v v9
+; V128-NEXT:    vid.v v10
+; V128-NEXT:    srli a0, a0, 3
+; V128-NEXT:    vsrl.vi v10, v10, 1
+; V128-NEXT:    vslidedown.vx v12, v10, a0
+; V128-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
+; V128-NEXT:    vrgatherei16.vv v13, v11, v12
+; V128-NEXT:    vrgatherei16.vv v12, v9, v10
 ; V128-NEXT:    vmv.v.i v0, 10
-; V128-NEXT:    vsrl.vi v14, v9, 1
-; V128-NEXT:    vsetvli zero, zero, e64, m2, ta, mu
-; V128-NEXT:    vrgatherei16.vv v10, v8, v14
-; V128-NEXT:    vrgatherei16.vv v10, v12, v14, v0.t
-; V128-NEXT:    vmv.v.v v8, v10
+; V128-NEXT:    vrgatherei16.vv v14, v8, v10
+; V128-NEXT:    vmv.v.v v15, v13
+; V128-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
+; V128-NEXT:    vmerge.vvm v8, v14, v12, v0
 ; V128-NEXT:    ret
 ;
 ; RV32-V512-LABEL: interleave_v2f64:
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
index da7cdf3ba8ec01..ac70e5a3081c4c 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-interleave.ll
@@ -51,15 +51,20 @@ define <4 x i32> @interleave_v2i32(<2 x i32> %x, <2 x i32> %y) {
 define <4 x i64> @interleave_v2i64(<2 x i64> %x, <2 x i64> %y) {
 ; V128-LABEL: interleave_v2i64:
 ; V128:       # %bb.0:
+; V128-NEXT:    csrr a0, vlenb
 ; V128-NEXT:    vsetivli zero, 4, e16, mf2, ta, ma
-; V128-NEXT:    vmv1r.v v12, v9
-; V128-NEXT:    vid.v v9
+; V128-NEXT:    vid.v v10
+; V128-NEXT:    srli a0, a0, 3
+; V128-NEXT:    vsrl.vi v10, v10, 1
+; V128-NEXT:    vslidedown.vx v12, v10, a0
+; V128-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
+; V128-NEXT:    vrgatherei16.vv v13, v11, v12
+; V128-NEXT:    vrgatherei16.vv v12, v9, v10
 ; V128-NEXT:    vmv.v.i v0, 10
-; V128-NEXT:    vsrl.vi v14, v9, 1
-; V128-NEXT:    vsetvli zero, zero, e64, m2, ta, mu
-; V128-NEXT:    vrgatherei16.vv v10, v8, v14
-; V128-NEXT:    vrgatherei16.vv v10, v12, v14, v0.t
-; V128-NEXT:    vmv.v.v v8, v10
+; V128-NEXT:    vrgatherei16.vv v14, v8, v10
+; V128-NEXT:    vmv.v.v v15, v13
+; V128-NEXT:    vsetivli zero, 4, e64, m2, ta, ma
+; V128-NEXT:    vmerge.vvm v8, v14, v12, v0
 ; V128-NEXT:    ret
 ;
 ; RV32-V512-LABEL: interleave_v2i64:
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
index a5039c58fccb1d..a6a79f57a3da6d 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-shuffles.ll
@@ -817,13 +817,17 @@ define <8 x i32> @shuffle_spread2_singlesrc_e32_index1(<8 x i32> %v) {
 define <8 x i32> @shuffle_spread2_singlesrc_e32_index2(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_spread2_singlesrc_e32_index2:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vsrl.vi v10, v10, 1
 ; CHECK-NEXT:    vadd.vi v12, v10, -1
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 undef, i32 undef, i32 0, i32 undef, i32 1, i32 undef, i32 2, i32 undef>
   ret <8 x i32> %out
@@ -836,9 +840,13 @@ define <8 x i32> @shuffle_spread3_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-NEXT:    vmv.v.i v10, 0
 ; CHECK-NEXT:    li a0, 1
 ; CHECK-NEXT:    vslide1down.vx v12, v10, a0
-; CHECK-NEXT:    vsetvli zero, zero, e64, m2, ta, ma
+; CHECK-NEXT:    csrr a0, vlenb
+; CHECK-NEXT:    srli a0, a0, 3
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 undef, i32 undef, i32 1, i32 undef, i32 undef, i32 2, i32 undef>
   ret <8 x i32> %out
@@ -848,12 +856,16 @@ define <8 x i32> @shuffle_spread3_singlesrc_e32(<8 x i32> %v) {
 define <8 x i32> @shuffle_spread4_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_spread4_singlesrc_e32:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vsrl.vi v12, v10, 2
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 undef, i32 undef, i32 undef, i32 1, i32 undef, i32 undef, i32 undef>
   ret <8 x i32> %out
@@ -980,12 +992,16 @@ define <8 x i32> @shuffle_repeat3_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-NEXT:    vmv.v.i v11, 1
 ; CHECK-NEXT:    li a0, 192
 ; CHECK-NEXT:    vmv.s.x v10, a0
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vmerge.vim v11, v11, 0, v0
 ; CHECK-NEXT:    vmv.v.v v0, v10
 ; CHECK-NEXT:    vmerge.vim v12, v11, 2, v0
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    srli a0, a0, 2
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 0, i32 0, i32 1, i32 1, i32 1, i32 2, i32 2>
   ret <8 x i32> %out
@@ -994,12 +1010,16 @@ define <8 x i32> @shuffle_repeat3_singlesrc_e32(<8 x i32> %v) {
 define <8 x i32> @shuffle_repeat4_singlesrc_e32(<8 x i32> %v) {
 ; CHECK-LABEL: shuffle_repeat4_singlesrc_e32:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    csrr a0, vlenb
 ; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
 ; CHECK-NEXT:    vid.v v10
+; CHECK-NEXT:    srli a0, a0, 2
 ; CHECK-NEXT:    vsrl.vi v12, v10, 2
-; CHECK-NEXT:    vsetvli zero, zero, e32, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v10, v12, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v9, v10
 ; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %out = shufflevector <8 x i32> %v, <8 x i32> poison, <8 x i32> <i32 0, i32 0, i32 0, i32 0, i32 1, i32 1, i32 1, i32 1>
   ret <8 x i32> %out
@@ -1291,11 +1311,23 @@ define void @shuffle_i128_splat(ptr %p) nounwind {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
 ; CHECK-NEXT:    vle64.v v8, (a0)
-; CHECK-NEXT:    lui a1, 16
+; CHECK-NEXT:    csrr a1, vlenb
+; CHECK-NEXT:    lui a2, 16
+; CHECK-NEXT:    srli a1, a1, 3
 ; CHECK-NEXT:    vsetivli zero, 4, e32, m1, ta, ma
-; CHECK-NEXT:    vmv.v.x v12, a1
-; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
+; CHECK-NEXT:    vmv.v.x v12, a2
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vslidedown.vx v13, v12, a1
+; CHECK-NEXT:    vslidedown.vx v14, v13, a1
+; CHECK-NEXT:    vsetvli a2, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v17, v9, v13
 ; CHECK-NEXT:    vrgatherei16.vv v16, v8, v12
+; CHECK-NEXT:    vrgatherei16.vv v18, v10, v14
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vslidedown.vx v8, v14, a1
+; CHECK-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v19, v11, v8
+; CHECK-NEXT:    vsetivli zero, 8, e64, m4, ta, ma
 ; CHECK-NEXT:    vse64.v v16, (a0)
 ; CHECK-NEXT:    ret
   %a = load <4 x i128>, ptr %p
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
index abbbfe8f252fb2..b7b5ca870bd900 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-shuffle-changes-length.ll
@@ -237,10 +237,15 @@ define <8 x i32> @v8i32_v4i32(<4 x i32>) {
 ; CHECK:       # %bb.0:
 ; CHECK-NEXT:    lui a0, %hi(.LCPI5_0)
 ; CHECK-NEXT:    addi a0, a0, %lo(.LCPI5_0)
-; CHECK-NEXT:    vsetivli zero, 8, e32, m2, ta, ma
-; CHECK-NEXT:    vle16.v v12, (a0)
-; CHECK-NEXT:    vrgatherei16.vv v10, v8, v12
-; CHECK-NEXT:    vmv.v.v v8, v10
+; CHECK-NEXT:    vsetivli zero, 8, e16, m1, ta, ma
+; CHECK-NEXT:    vle16.v v9, (a0)
+; CHECK-NEXT:    csrr a0, vlenb
+; CHECK-NEXT:    srli a0, a0, 2
+; CHECK-NEXT:    vslidedown.vx v10, v9, a0
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v12, v10
+; CHECK-NEXT:    vrgatherei16.vv v10, v8, v9
+; CHECK-NEXT:    vmv2r.v v8, v10
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <8 x i32> <i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <8 x i32> %2
@@ -249,30 +254,38 @@ define <8 x i32> @v8i32_v4i32(<4 x i32>) {
 define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 ; CHECK-LABEL: v16i32_v4i32:
 ; CHECK:       # %bb.0:
-; CHECK-NEXT:    lui a0, 2
 ; CHECK-NEXT:    vsetivli zero, 16, e8, m1, ta, ma
-; CHECK-NEXT:    vmv.v.i v9, 3
+; CHECK-NEXT:    vmv1r.v v10, v8
+; CHECK-NEXT:    lui a0, 2
+; CHECK-NEXT:    vmv.v.i v11, 3
 ; CHECK-NEXT:    addi a1, a0, 265
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
 ; CHECK-NEXT:    lui a1, 4
 ; CHECK-NEXT:    addi a1, a1, 548
-; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v9, v9, 2, v0
-; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vmv.s.x v0, a1
+; CHECK-NEXT:    vmv.s.x v8, a1
+; CHECK-NEXT:    csrr a1, vlenb
 ; CHECK-NEXT:    addi a0, a0, -1856
+; CHECK-NEXT:    srli a1, a1, 2
+; CHECK-NEXT:    vmv.s.x v9, a0
 ; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v9, v9, 0, v0
-; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vmv.s.x v0, a0
-; CHECK-NEXT:    vsetvli zero, zero, e8, m1, ta, ma
-; CHECK-NEXT:    vmerge.vim v9, v9, 1, v0
+; CHECK-NEXT:    vmerge.vim v11, v11, 2, v0
+; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmerge.vim v8, v11, 0, v0
+; CHECK-NEXT:    vmv1r.v v0, v9
+; CHECK-NEXT:    vmerge.vim v8, v8, 1, v0
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m2, ta, ma
-; CHECK-NEXT:    vsext.vf2 v16, v9
-; CHECK-NEXT:    vsetvli zero, zero, e32, m4, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v12, v8, v16
-; CHECK-NEXT:    vmv.v.v v8, v12
+; CHECK-NEXT:    vsext.vf2 v14, v8
+; CHECK-NEXT:    vslidedown.vx v16, v14, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v9, v12, v16
+; CHECK-NEXT:    vrgatherei16.vv v8, v10, v14
+; CHECK-NEXT:    vsetivli zero, 16, e16, m2, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v16, a1
+; CHECK-NEXT:    vslidedown.vx v14, v12, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v10, v11, v12
+; CHECK-NEXT:    vrgatherei16.vv v11, v12, v14
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <16 x i32> <i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <16 x i32> %2
@@ -281,31 +294,55 @@ define <16 x i32> @v16i32_v4i32(<4 x i32>) {
 define <32 x i32> @v32i32_v4i32(<4 x i32>) {
 ; CHECK-LABEL: v32i32_v4i32:
 ; CHECK:       # %bb.0:
+; CHECK-NEXT:    vsetivli zero, 1, e32, m4, ta, ma
+; CHECK-NEXT:    vmv1r.v v10, v8
 ; CHECK-NEXT:    li a0, 32
 ; CHECK-NEXT:    lui a1, 135432
 ; CHECK-NEXT:    addi a1, a1, 1161
-; CHECK-NEXT:    vsetivli zero, 1, e32, m1, ta, ma
 ; CHECK-NEXT:    vmv.s.x v0, a1
 ; CHECK-NEXT:    lui a1, 270865
 ; CHECK-NEXT:    addi a1, a1, 548
-; CHECK-NEXT:    vmv.s.x v9, a1
+; CHECK-NEXT:    vmv.s.x v8, a1
 ; CHECK-NEXT:    lui a1, 100550
+; CHECK-NEXT:    addi a1, a1, 64
+; CHECK-NEXT:    vmv.s.x v9, a1
+; CHECK-NEXT:    csrr a1, vlenb
 ; CHECK-NEXT:    vsetvli zero, a0, e8, m2, ta, ma
-; CHECK-NEXT:    vmv.v.i v10, 3
-; CHECK-NEXT:    addi a0, a1, 64
-; CHECK-NEXT:    vmerge.vim v18, v10, 2, v0
-; CHECK-NEXT:    vsetvli zero, zero, e32, m8, ta, ma
-; CHECK-NEXT:    vmv.s.x v16, a0
+; CHECK-NEXT:    vmv.v.i v12, 3
+; CHECK-NEXT:    srli a1, a1, 2
+; CHECK-NEXT:    vmerge.vim v12, v12, 2, v0
+; CHECK-NEXT:    vmv1r.v v0, v8
+; CHECK-NEXT:    vmerge.vim v12, v12, 0, v0
 ; CHECK-NEXT:    vmv1r.v v0, v9
-; CHECK-NEXT:    vsetvli zero, zero, e8, m2, ta, ma
-; CHECK-NEXT:    vmerge.vim v18, v18, 0, v0
-; CHECK-NEXT:    vmv1r.v v0, v16
-; CHECK-NEXT:    vmerge.vim v16, v18, 1, v0
+; CHECK-NEXT:    vmerge.vim v8, v12, 1, v0
 ; CHECK-NEXT:    vsetvli zero, zero, e16, m4, ta, ma
-; CHECK-NEXT:    vsext.vf2 v24, v16
-; CHECK-NEXT:    vsetvli zero, zero, e32, m8, ta, ma
-; CHECK-NEXT:    vrgatherei16.vv v16, v8, v24
-; CHECK-NEXT:    vmv.v.v v8, v16
+; CHECK-NEXT:    vsext.vf2 v16, v8
+; CHECK-NEXT:    vslidedown.vx v12, v16, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v9, v11, v12
+; CHECK-NEXT:    vrgatherei16.vv v8, v10, v16
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v10, v11, v12
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v12, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v11, v16, v12
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v20, v12, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v12, v17, v20
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v20, v20, a1
+; CHECK-NEXT:    vsetvli a2, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v13, v18, v20
+; CHECK-NEXT:    vsetvli zero, a0, e16, m4, ta, ma
+; CHECK-NEXT:    vslidedown.vx v20, v20, a1
+; CHECK-NEXT:    vslidedown.vx v24, v20, a1
+; CHECK-NEXT:    vsetvli a0, zero, e32, m1, ta, ma
+; CHECK-NEXT:    vrgatherei16.vv v14, v19, v20
+; CHECK-NEXT:    vrgatherei16.vv v15, v16, v24
 ; CHECK-NEXT:    ret
   %2 = shufflevector <4 x i32> %0, <4 x i32> poison, <32 x i32> <i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 0, i32 2, i32 3, i32 0, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3, i32 1, i32 2, i32 0, i32 3, i32 1, i32 1, i32 2, i32 0, i32 3>
   ret <32 x i32> %2

preames (Collaborator, Author) commented on these lines of the shuffle_i128_splat test diff:

; CHECK-NEXT: vsetivli zero, 8, e64, m4, ta, ma
; CHECK-NEXT: vmv.v.x v12, a2
; CHECK-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; CHECK-NEXT: vslidedown.vx v13, v12, a1

Just to note here, we can do better for the splat cases. Since these share the same indices for all dest components, we can reuse a single source. However, we can also go further and just do a single m1 shuffle followed by a series of whole register moves. I'm planning a change for this down the road.
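
As a rough illustration of that follow-up idea, here is a scalar sketch under the stated assumption (gather one m1 chunk from the low source register, then replicate it with whole-register moves); it is not code from any LLVM patch:

```cpp
#include <cstddef>
#include <vector>

// Scalar sketch of "single m1 shuffle + whole register moves" for the splat
// case: all destination chunks use the same indices and the same (low)
// source register, so gather once and then copy that chunk into every m1
// slot of the destination group (the vmv1r.v moves).
std::vector<int> splatChunkGather(const std::vector<int> &LowSrcChunk,
                                  const std::vector<int> &ChunkIndices,
                                  int NumDestChunks) {
  std::vector<int> OneChunk(ChunkIndices.size());
  for (std::size_t Lane = 0; Lane < ChunkIndices.size(); ++Lane)
    OneChunk[Lane] = LowSrcChunk[ChunkIndices[Lane]]; // one m1 vrgather
  std::vector<int> Dst;
  for (int C = 0; C < NumDestChunks; ++C)             // whole-register copies
    Dst.insert(Dst.end(), OneChunk.begin(), OneChunk.end());
  return Dst;
}
```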

topperc (Collaborator) commented Feb 11, 2025

"with the plan extract variant" - I think "plan" was supposed to be "plain" here?

⚠️ C/C++ code formatter, clang-format found issues in your code. ⚠️

You can test this locally with the following command:
git-clang-format --diff e225677b1f6fe9f8e928836276f1d43b0591e9de c359d56faa58de6c0a4bae9a58e3ec9284411d73 --extensions cpp -- llvm/lib/Target/RISCV/RISCVISelLowering.cpp
View the diff from clang-format:
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 4f0904ed0d..c3c5416938 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -5352,8 +5352,7 @@ static bool isLocalRepeatingShuffle(ArrayRef<int> Mask, int Span) {
 
 /// Is this mask only using elements from the first span of the input?
 static bool isLowSourceShuffle(ArrayRef<int> Mask, int Span) {
-  return all_of(Mask,
-                [&](const auto &Idx) { return Idx == -1 || Idx < Span; });
+  return all_of(Mask, [&](const auto &Idx) { return Idx == -1 || Idx < Span; });
 }
 
 /// Try to widen element type to get a new mask value for a better permutation

topperc (Collaborator) left a comment:

LGTM

preames merged commit 8374d42 into llvm:main on Feb 11, 2025
4 of 7 checks passed
asb (Contributor) commented Feb 12, 2025

This PR appears to have caused a miscompile on the RVA23 EVL tail-folding buildbot (and very possibly for other RISC-V vector configs; I haven't had a chance to explore). The miscompile is exposed through incorrect output from the stage2 compiler. However, I've managed to reproduce locally with a single-stage build of Clang and then cross-compiling the llvm-test-suite. 20050826-2.c from the GCC torture suite fails, but passes if you revert 8374d42 (the commit landed from this PR).

I'm currently double-checking that the commit cleanly reverts (i.e. there are no later test changes that would be invalidated by the revert).

preames (Collaborator, Author) commented Feb 12, 2025

> I'm currently double-checking that the commit cleanly reverts (i.e. there are no later test changes that would be invalidated by the revert).

I agree that the right thing to do here is to revert while we're investigating. I'm happy to handle that if you'd rather. Can you file an issue for the failure, ideally with a pointer to how to reproduce the torture-suite failure?

asb (Contributor) commented Feb 12, 2025

> I'm currently double-checking that the commit cleanly reverts (i.e. there are no later test changes that would be invalidated by the revert).
>
> I agree that the right thing to do here is to revert while we're investigating. I'm happy to handle that if you'd rather. Can you file an issue for the failure, ideally with a pointer to how to reproduce the torture-suite failure?

I'll likely have to step away shortly, so if you can look at the revert that would be great, thank you. I'll file that bug with collated information on how to reproduce.

preames added a commit that referenced this pull request Feb 12, 2025
…#126108)"

This reverts commit 8374d42.  A miscompile
was reported against the review thread, reverting while we investigate.
preames (Collaborator, Author) commented Feb 12, 2025

> I'll likely have to step away shortly, so if you can look at the revert that would be great, thank you.

Reverted; the tests were fine, but I had to manually resolve a conflict with a later patch.

> I'll file that bug with collated information on how to reproduce.

Look forward to it. I took a skim through the code and don't see an obvious problem, so I'll need a reproducer to identify the mistake.

preames (Collaborator, Author) commented Feb 12, 2025

> I'll file that bug with collated information on how to reproduce.
>
> Look forward to it. I took a skim through the code and don't see an obvious problem, so I'll need a reproducer to identify the mistake.

Correction, I think I see the bug. Update to follow once confirmed.

asb (Contributor) commented Feb 12, 2025

Bug posted as #126943. I have to step away from the computer right now, so I can't do the obvious steps of reducing further. Thanks for the help with the quick revert!

preames (Collaborator, Author) commented Feb 12, 2025

Well, the good news is that I definitely found the bug. I'm more than a bit embarrassed by this one: the code just doesn't do what the comments or the review said it did. The code was using each source register in the source group, not the low register as it should have been. This produces a completely wrong result, including in all of the tests changed in the review. I'm not sure how I screwed this up (I know I had a working version before splitting and rebasing), but yeah, I definitely screwed that up.
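
For the record, here is a minimal scalar model of the bug as described above; this is an assumed reconstruction for illustration, not code from the patch or from the fixed version:

```cpp
#include <cstddef>
#include <vector>

// Scalar model of the decomposition. For a low-source shuffle every
// destination chunk must gather from source chunk 0; the landed code
// instead walked the source register group, reading source chunk C for
// destination chunk C, which is the miscompile described above.
std::vector<int> modelLowering(const std::vector<int> &Src,
                               const std::vector<int> &Idx,
                               std::size_t ElemsPerChunk, bool AsLanded) {
  std::vector<int> Dst(Idx.size());
  for (std::size_t C = 0; C * ElemsPerChunk < Idx.size(); ++C) {
    std::size_t SrcBase = AsLanded ? C * ElemsPerChunk : 0; // bug vs. intent
    for (std::size_t Lane = 0; Lane < ElemsPerChunk; ++Lane) {
      std::size_t I = C * ElemsPerChunk + Lane;
      Dst[I] = Src[SrcBase + Idx[I]]; // Idx[I] < ElemsPerChunk by precondition
    }
  }
  return Dst;
}
```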

I'm going to reopen the review, as this seems beyond a reasonable quick fix, and reland from there.

preames (Collaborator, Author) commented Feb 12, 2025

A corrected version of this patch is posted as: #126951

flovent pushed a commit to flovent/llvm-project that referenced this pull request Feb 13, 2025
…26108)

flovent pushed a commit to flovent/llvm-project that referenced this pull request Feb 13, 2025
…llvm#126108)"

This reverts commit 8374d42.  A miscompile
was reported against the review thread, reverting while we investigate.
joaosaffran pushed a commit to joaosaffran/llvm-project that referenced this pull request Feb 14, 2025
…26108)

joaosaffran pushed a commit to joaosaffran/llvm-project that referenced this pull request Feb 14, 2025
…llvm#126108)"

This reverts commit 8374d42.  A miscompile
was reported against the review thread, reverting while we investigate.
sivan-shani pushed a commit to sivan-shani/llvm-project that referenced this pull request Feb 24, 2025
…26108)

sivan-shani pushed a commit to sivan-shani/llvm-project that referenced this pull request Feb 24, 2025
…llvm#126108)"

This reverts commit 8374d42.  A miscompile
was reported against the review thread, reverting while we investigate.