[RISCV] Pack build_vectors into largest available element type #97351
Our worst-case build_vector lowering is a serial chain of vslide1down.vx operations, which creates a long dependency chain through a relatively high-latency operation. We can instead pack elements together into ELEN-sized chunks in scalar registers, and move each chunk from an integer register into the vector in a single operation.

This reduces the length of the serial chain on the vector side, and costs at most three scalar instructions per element. This is a win for all cores when the sum of the latencies of the scalar instructions is less than that of the vslide1down.vx being replaced, and is particularly profitable for out-of-order cores, which can overlap the scalar computation.

This patch is restricted to configurations with zba and zbb. Without both, the zero extend might require two instructions, which would bring the total scalar instructions per element to four. zba and zbb are both present in the rva22u64 baseline, which looks to be quite common for hardware in practice; we could extend this to systems without bitmanip with a bit of extra effort.
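For intuition, here is a rough scalar model of the per-pair packing for sub-XLen integer elements. This is an illustrative C++ sketch with made-up names (packPair is not part of the patch); the actual change builds the equivalent ISD::AND/ISD::SHL/ISD::OR nodes in the SelectionDAG:

#include <cstdint>

// Pack two ElemSizeInBits-wide elements A (lane i) and B (lane i+1) into a
// chunk twice as wide; lane i lands in the low bits of the chunk.
uint64_t packPair(uint64_t A, uint64_t B, unsigned ElemSizeInBits) {
  uint64_t Mask = (uint64_t(1) << ElemSizeInBits) - 1; // ElemSizeInBits < 64 here
  A &= Mask;                        // zero-extend lane i   (andi / zext.h / add.uw)
  B &= Mask;                        // zero-extend lane i+1
  return A | (B << ElemSizeInBits); // slli + or
}

Each element costs at most a zero-extend, a shift, and an or, which is where the three-scalar-instructions-per-element bound comes from; the corresponding saving is a shorter vslide1down.vx chain on the vector side.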
@llvm/pr-subscribers-backend-risc-v

Author: Philip Reames (preames)

Patch is 27.76 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/97351.diff

2 Files Affected:
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 5e94fbec5a04a..3a431e8aa8c0e 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -3896,6 +3896,66 @@ static SDValue lowerBuildVectorOfConstants(SDValue Op, SelectionDAG &DAG,
return SDValue();
}
+/// Double the element size of the build vector to reduce the number
+/// of vslide1down in the build vector chain. In the worst case, this
+/// trades three scalar operations for 1 vector operation. Scalar
+/// operations are generally lower latency, and for out-of-order cores
+/// we also benefit from additional parallelism.
+static SDValue lowerBuildVectorViaPacking(SDValue Op, SelectionDAG &DAG,
+ const RISCVSubtarget &Subtarget) {
+ SDLoc DL(Op);
+ MVT VT = Op.getSimpleValueType();
+ assert(VT.isFixedLengthVector() && "Unexpected vector!");
+ MVT ElemVT = VT.getVectorElementType();
+ if (!ElemVT.isInteger())
+ return SDValue();
+
+ // TODO: Relax these architectural restrictions, possibly with costing
+ // of the actual instructions required.
+ if (!Subtarget.hasStdExtZbb() || !Subtarget.hasStdExtZba())
+ return SDValue();
+
+ unsigned NumElts = VT.getVectorNumElements();
+ unsigned ElemSizeInBits = ElemVT.getSizeInBits();
+ if (ElemSizeInBits >= Subtarget.getELen() || NumElts % 2 != 0)
+ return SDValue();
+
+ // Produce [B,A] packed into a type twice as wide. Note that all
+ // scalars are XLenVT, possibly masked (see below).
+ MVT XLenVT = Subtarget.getXLenVT();
+ auto pack = [&](SDValue A, SDValue B) {
+ // Bias the scheduling of the inserted operations to near the
+ // definition of the element - this tends to reduce register
+ // pressure overall.
+ SDLoc ElemDL(B);
+ SDValue ShtAmt = DAG.getConstant(ElemSizeInBits, ElemDL, XLenVT);
+ return DAG.getNode(ISD::OR, ElemDL, XLenVT, A,
+ DAG.getNode(ISD::SHL, ElemDL, XLenVT, B, ShtAmt));
+ };
+
+ SDValue Mask = DAG.getConstant(
+ APInt::getLowBitsSet(XLenVT.getSizeInBits(), ElemSizeInBits), DL, XLenVT);
+ SmallVector<SDValue> NewOperands;
+ NewOperands.reserve(NumElts / 2);
+ for (unsigned i = 0; i < VT.getVectorNumElements(); i += 2) {
+ SDValue A = Op.getOperand(i);
+ SDValue B = Op.getOperand(i + 1);
+ if (ElemVT != XLenVT) {
+ // Bias the scheduling of the inserted operations to near the
+ // definition of the element - this tends to reduce register
+ // pressure overall.
+ A = DAG.getNode(ISD::AND, SDLoc(A), XLenVT, A, Mask);
+ B = DAG.getNode(ISD::AND, SDLoc(B), XLenVT, B, Mask);
+ }
+ NewOperands.push_back(pack(A, B));
+ }
+ assert(NumElts == NewOperands.size() * 2);
+ MVT WideVT = MVT::getIntegerVT(ElemSizeInBits * 2);
+ MVT WideVecVT = MVT::getVectorVT(WideVT, NumElts / 2);
+ return DAG.getNode(ISD::BITCAST, DL, VT,
+ DAG.getBuildVector(WideVecVT, DL, NewOperands));
+}
+
static SDValue lowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG,
const RISCVSubtarget &Subtarget) {
MVT VT = Op.getSimpleValueType();
@@ -3981,6 +4041,13 @@ static SDValue lowerBUILD_VECTOR(SDValue Op, SelectionDAG &DAG,
return convertFromScalableVector(VT, Vec, DAG, Subtarget);
}
+ // If we're about to resort to vslide1down (or stack usage), pack our
+ // elements into the widest scalar type we can. This will force a VL/VTYPE
+ // toggle, but reduces the critical path, the number of vslide1down ops
+ // required, and possibly enables scalar folds of the values.
+ if (SDValue Res = lowerBuildVectorViaPacking(Op, DAG, Subtarget))
+ return Res;
+
// For m1 vectors, if we have non-undef values in both halves of our vector,
// split the vector into low and high halves, build them separately, then
// use a vselect to combine them. For long vectors, this cuts the critical
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
index 6cd69bac46e3c..94f9f480d0ba0 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int-buildvec.ll
@@ -1267,43 +1267,53 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_loads_contigous:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: addi a6, a0, 8
-; RVA22U64-NEXT: lbu t6, 1(a0)
+; RVA22U64-NEXT: lbu a1, 1(a0)
+; RVA22U64-NEXT: lbu a2, 0(a0)
; RVA22U64-NEXT: lbu a3, 2(a0)
; RVA22U64-NEXT: lbu a4, 3(a0)
-; RVA22U64-NEXT: lbu a5, 4(a0)
-; RVA22U64-NEXT: lbu t5, 5(a0)
-; RVA22U64-NEXT: lbu a7, 6(a0)
-; RVA22U64-NEXT: lbu t0, 7(a0)
-; RVA22U64-NEXT: lbu t1, 9(a0)
-; RVA22U64-NEXT: lbu t2, 10(a0)
-; RVA22U64-NEXT: lbu t3, 11(a0)
-; RVA22U64-NEXT: lbu t4, 12(a0)
-; RVA22U64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
-; RVA22U64-NEXT: vlse8.v v8, (a0), zero
-; RVA22U64-NEXT: lbu a1, 13(a0)
-; RVA22U64-NEXT: lbu a2, 14(a0)
+; RVA22U64-NEXT: slli a1, a1, 8
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: slli a3, a3, 16
+; RVA22U64-NEXT: slli a4, a4, 24
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: lbu a2, 4(a0)
+; RVA22U64-NEXT: or a1, a1, a3
+; RVA22U64-NEXT: lbu a3, 5(a0)
+; RVA22U64-NEXT: lbu a4, 6(a0)
+; RVA22U64-NEXT: slli a2, a2, 32
+; RVA22U64-NEXT: lbu a5, 7(a0)
+; RVA22U64-NEXT: slli a3, a3, 40
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 48
+; RVA22U64-NEXT: slli a5, a5, 56
+; RVA22U64-NEXT: or a4, a4, a5
+; RVA22U64-NEXT: or a2, a2, a4
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: lbu a2, 9(a0)
+; RVA22U64-NEXT: lbu a3, 8(a0)
+; RVA22U64-NEXT: lbu a4, 10(a0)
+; RVA22U64-NEXT: lbu a5, 11(a0)
+; RVA22U64-NEXT: slli a2, a2, 8
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 16
+; RVA22U64-NEXT: slli a5, a5, 24
+; RVA22U64-NEXT: or a4, a4, a5
+; RVA22U64-NEXT: lbu a3, 12(a0)
+; RVA22U64-NEXT: or a2, a2, a4
+; RVA22U64-NEXT: lbu a4, 13(a0)
+; RVA22U64-NEXT: lbu a5, 14(a0)
+; RVA22U64-NEXT: slli a3, a3, 32
; RVA22U64-NEXT: lbu a0, 15(a0)
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t6
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a3
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a5
-; RVA22U64-NEXT: vlse8.v v9, (a6), zero
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t5
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a7
-; RVA22U64-NEXT: vslide1down.vx v10, v8, t0
-; RVA22U64-NEXT: vslide1down.vx v8, v9, t1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t2
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t3
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a2
-; RVA22U64-NEXT: li a1, 255
-; RVA22U64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
-; RVA22U64-NEXT: vmv.s.x v0, a1
-; RVA22U64-NEXT: vsetvli zero, zero, e8, m1, ta, mu
+; RVA22U64-NEXT: slli a4, a4, 40
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: slli a5, a5, 48
+; RVA22U64-NEXT: slli a0, a0, 56
+; RVA22U64-NEXT: or a0, a0, a5
+; RVA22U64-NEXT: or a0, a0, a3
+; RVA22U64-NEXT: or a0, a0, a2
+; RVA22U64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RVA22U64-NEXT: vmv.v.x v8, a1
; RVA22U64-NEXT: vslide1down.vx v8, v8, a0
-; RVA22U64-NEXT: vslidedown.vi v8, v10, 8, v0.t
; RVA22U64-NEXT: ret
;
; RV64ZVE32-LABEL: buildvec_v16i8_loads_contigous:
@@ -1484,43 +1494,53 @@ define <16 x i8> @buildvec_v16i8_loads_gather(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_loads_gather:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: addi a6, a0, 82
-; RVA22U64-NEXT: lbu t6, 1(a0)
+; RVA22U64-NEXT: lbu a1, 1(a0)
+; RVA22U64-NEXT: lbu a2, 0(a0)
; RVA22U64-NEXT: lbu a3, 22(a0)
; RVA22U64-NEXT: lbu a4, 31(a0)
-; RVA22U64-NEXT: lbu a5, 44(a0)
-; RVA22U64-NEXT: lbu t5, 55(a0)
-; RVA22U64-NEXT: lbu a7, 623(a0)
-; RVA22U64-NEXT: lbu t0, 75(a0)
-; RVA22U64-NEXT: lbu t1, 93(a0)
-; RVA22U64-NEXT: lbu t2, 105(a0)
-; RVA22U64-NEXT: lbu t3, 161(a0)
-; RVA22U64-NEXT: lbu t4, 124(a0)
-; RVA22U64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
-; RVA22U64-NEXT: vlse8.v v8, (a0), zero
-; RVA22U64-NEXT: lbu a1, 163(a0)
-; RVA22U64-NEXT: lbu a2, 144(a0)
+; RVA22U64-NEXT: slli a1, a1, 8
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: slli a3, a3, 16
+; RVA22U64-NEXT: slli a4, a4, 24
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: lbu a2, 44(a0)
+; RVA22U64-NEXT: or a1, a1, a3
+; RVA22U64-NEXT: lbu a3, 55(a0)
+; RVA22U64-NEXT: lbu a4, 623(a0)
+; RVA22U64-NEXT: slli a2, a2, 32
+; RVA22U64-NEXT: lbu a5, 75(a0)
+; RVA22U64-NEXT: slli a3, a3, 40
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 48
+; RVA22U64-NEXT: slli a5, a5, 56
+; RVA22U64-NEXT: or a4, a4, a5
+; RVA22U64-NEXT: or a2, a2, a4
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: lbu a2, 93(a0)
+; RVA22U64-NEXT: lbu a3, 82(a0)
+; RVA22U64-NEXT: lbu a4, 105(a0)
+; RVA22U64-NEXT: lbu a5, 161(a0)
+; RVA22U64-NEXT: slli a2, a2, 8
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 16
+; RVA22U64-NEXT: slli a5, a5, 24
+; RVA22U64-NEXT: or a4, a4, a5
+; RVA22U64-NEXT: lbu a3, 124(a0)
+; RVA22U64-NEXT: or a2, a2, a4
+; RVA22U64-NEXT: lbu a4, 163(a0)
+; RVA22U64-NEXT: lbu a5, 144(a0)
+; RVA22U64-NEXT: slli a3, a3, 32
; RVA22U64-NEXT: lbu a0, 154(a0)
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t6
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a3
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a5
-; RVA22U64-NEXT: vlse8.v v9, (a6), zero
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t5
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a7
-; RVA22U64-NEXT: vslide1down.vx v10, v8, t0
-; RVA22U64-NEXT: vslide1down.vx v8, v9, t1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t2
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t3
-; RVA22U64-NEXT: vslide1down.vx v8, v8, t4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a2
-; RVA22U64-NEXT: li a1, 255
-; RVA22U64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
-; RVA22U64-NEXT: vmv.s.x v0, a1
-; RVA22U64-NEXT: vsetvli zero, zero, e8, m1, ta, mu
+; RVA22U64-NEXT: slli a4, a4, 40
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: slli a5, a5, 48
+; RVA22U64-NEXT: slli a0, a0, 56
+; RVA22U64-NEXT: or a0, a0, a5
+; RVA22U64-NEXT: or a0, a0, a3
+; RVA22U64-NEXT: or a0, a0, a2
+; RVA22U64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RVA22U64-NEXT: vmv.v.x v8, a1
; RVA22U64-NEXT: vslide1down.vx v8, v8, a0
-; RVA22U64-NEXT: vslidedown.vi v8, v10, 8, v0.t
; RVA22U64-NEXT: ret
;
; RV64ZVE32-LABEL: buildvec_v16i8_loads_gather:
@@ -1660,22 +1680,30 @@ define <16 x i8> @buildvec_v16i8_undef_low_half(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_undef_low_half:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: addi a1, a0, 82
-; RVA22U64-NEXT: lbu a6, 93(a0)
+; RVA22U64-NEXT: lbu a1, 93(a0)
+; RVA22U64-NEXT: lbu a2, 82(a0)
; RVA22U64-NEXT: lbu a3, 105(a0)
; RVA22U64-NEXT: lbu a4, 161(a0)
-; RVA22U64-NEXT: lbu a5, 124(a0)
-; RVA22U64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
-; RVA22U64-NEXT: vlse8.v v8, (a1), zero
-; RVA22U64-NEXT: lbu a1, 163(a0)
-; RVA22U64-NEXT: lbu a2, 144(a0)
+; RVA22U64-NEXT: slli a1, a1, 8
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: slli a3, a3, 16
+; RVA22U64-NEXT: slli a4, a4, 24
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: lbu a2, 124(a0)
+; RVA22U64-NEXT: or a1, a1, a3
+; RVA22U64-NEXT: lbu a3, 163(a0)
+; RVA22U64-NEXT: lbu a4, 144(a0)
+; RVA22U64-NEXT: slli a2, a2, 32
; RVA22U64-NEXT: lbu a0, 154(a0)
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a6
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a3
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a5
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a2
+; RVA22U64-NEXT: slli a3, a3, 40
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 48
+; RVA22U64-NEXT: slli a0, a0, 56
+; RVA22U64-NEXT: or a0, a0, a4
+; RVA22U64-NEXT: or a0, a0, a2
+; RVA22U64-NEXT: or a0, a0, a1
+; RVA22U64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RVA22U64-NEXT: vmv.v.i v8, 0
; RVA22U64-NEXT: vslide1down.vx v8, v8, a0
; RVA22U64-NEXT: ret
;
@@ -1773,23 +1801,31 @@ define <16 x i8> @buildvec_v16i8_undef_high_half(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_undef_high_half:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: lbu a6, 1(a0)
-; RVA22U64-NEXT: lbu a2, 22(a0)
-; RVA22U64-NEXT: lbu a3, 31(a0)
-; RVA22U64-NEXT: lbu a4, 44(a0)
-; RVA22U64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
-; RVA22U64-NEXT: vlse8.v v8, (a0), zero
-; RVA22U64-NEXT: lbu a5, 55(a0)
-; RVA22U64-NEXT: lbu a1, 623(a0)
+; RVA22U64-NEXT: lbu a1, 1(a0)
+; RVA22U64-NEXT: lbu a2, 0(a0)
+; RVA22U64-NEXT: lbu a3, 22(a0)
+; RVA22U64-NEXT: lbu a4, 31(a0)
+; RVA22U64-NEXT: slli a1, a1, 8
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: slli a3, a3, 16
+; RVA22U64-NEXT: slli a4, a4, 24
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: lbu a2, 44(a0)
+; RVA22U64-NEXT: or a1, a1, a3
+; RVA22U64-NEXT: lbu a3, 55(a0)
+; RVA22U64-NEXT: lbu a4, 623(a0)
+; RVA22U64-NEXT: slli a2, a2, 32
; RVA22U64-NEXT: lbu a0, 75(a0)
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a6
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a2
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a3
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a5
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a0
-; RVA22U64-NEXT: vslidedown.vi v8, v8, 8
+; RVA22U64-NEXT: slli a3, a3, 40
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 48
+; RVA22U64-NEXT: slli a0, a0, 56
+; RVA22U64-NEXT: or a0, a0, a4
+; RVA22U64-NEXT: or a0, a0, a2
+; RVA22U64-NEXT: or a0, a0, a1
+; RVA22U64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RVA22U64-NEXT: vmv.v.x v8, a0
+; RVA22U64-NEXT: vslide1down.vx v8, v8, zero
; RVA22U64-NEXT: ret
;
; RV64ZVE32-LABEL: buildvec_v16i8_undef_high_half:
@@ -1901,31 +1937,33 @@ define <16 x i8> @buildvec_v16i8_undef_edges(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_undef_edges:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: addi a1, a0, 31
-; RVA22U64-NEXT: addi a6, a0, 82
-; RVA22U64-NEXT: lbu a3, 44(a0)
-; RVA22U64-NEXT: lbu a4, 55(a0)
-; RVA22U64-NEXT: lbu a5, 623(a0)
-; RVA22U64-NEXT: lbu a7, 75(a0)
-; RVA22U64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
-; RVA22U64-NEXT: vlse8.v v8, (a1), zero
-; RVA22U64-NEXT: lbu a1, 93(a0)
-; RVA22U64-NEXT: lbu a2, 105(a0)
+; RVA22U64-NEXT: lbu a1, 44(a0)
+; RVA22U64-NEXT: lbu a2, 55(a0)
+; RVA22U64-NEXT: lbu a3, 31(a0)
+; RVA22U64-NEXT: lbu a4, 623(a0)
+; RVA22U64-NEXT: slli a1, a1, 32
+; RVA22U64-NEXT: slli a2, a2, 40
+; RVA22U64-NEXT: lbu a5, 75(a0)
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: slli a3, a3, 24
+; RVA22U64-NEXT: slli a4, a4, 48
+; RVA22U64-NEXT: slli a5, a5, 56
+; RVA22U64-NEXT: or a4, a4, a5
+; RVA22U64-NEXT: or a1, a1, a4
+; RVA22U64-NEXT: add.uw a1, a3, a1
+; RVA22U64-NEXT: lbu a2, 93(a0)
+; RVA22U64-NEXT: lbu a3, 82(a0)
+; RVA22U64-NEXT: lbu a4, 105(a0)
; RVA22U64-NEXT: lbu a0, 161(a0)
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a3
-; RVA22U64-NEXT: vlse8.v v9, (a6), zero
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a4
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a5
-; RVA22U64-NEXT: vslide1down.vx v10, v8, a7
-; RVA22U64-NEXT: vslide1down.vx v8, v9, a1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a2
+; RVA22U64-NEXT: slli a2, a2, 8
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: slli a4, a4, 16
+; RVA22U64-NEXT: slli a0, a0, 24
+; RVA22U64-NEXT: or a0, a0, a4
+; RVA22U64-NEXT: or a0, a0, a2
+; RVA22U64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RVA22U64-NEXT: vmv.v.x v8, a1
; RVA22U64-NEXT: vslide1down.vx v8, v8, a0
-; RVA22U64-NEXT: li a0, 255
-; RVA22U64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
-; RVA22U64-NEXT: vmv.s.x v0, a0
-; RVA22U64-NEXT: vsetvli zero, zero, e8, m1, ta, mu
-; RVA22U64-NEXT: vslidedown.vi v8, v8, 4
-; RVA22U64-NEXT: vslidedown.vi v8, v10, 8, v0.t
; RVA22U64-NEXT: ret
;
; RV64ZVE32-LABEL: buildvec_v16i8_undef_edges:
@@ -2057,35 +2095,35 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_loads_undef_scattered:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: addi a6, a0, 82
-; RVA22U64-NEXT: lbu a2, 1(a0)
+; RVA22U64-NEXT: lbu a1, 1(a0)
+; RVA22U64-NEXT: lbu a2, 0(a0)
+; RVA22U64-NEXT: slli a1, a1, 8
; RVA22U64-NEXT: lbu a3, 44(a0)
; RVA22U64-NEXT: lbu a4, 55(a0)
-; RVA22U64-NEXT: lbu t0, 75(a0)
-; RVA22U64-NEXT: lbu a7, 93(a0)
-; RVA22U64-NEXT: vsetivli zero, 16, e8, m1, ta, ma
-; RVA22U64-NEXT: vlse8.v v8, (a0), zero
-; RVA22U64-NEXT: lbu a1, 124(a0)
-; RVA22U64-NEXT: lbu a5, 144(a0)
-; RVA22U64-NEXT: lbu a0, 154(a0)
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a2
-; RVA22U64-NEXT: vslidedown.vi v8, v8, 2
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a3
-; RVA22U64-NEXT: vlse8.v v9, (a6), zero
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a4
-; RVA22U64-NEXT: vslidedown.vi v8, v8, 1
-; RVA22U64-NEXT: vslide1down.vx v10, v8, t0
-; RVA22U64-NEXT: vslide1down.vx v8, v9, a7
-; RVA22U64-NEXT: vslidedown.vi v8, v8, 2
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a1
-; RVA22U64-NEXT: vslidedown.vi v8, v8, 1
-; RVA22U64-NEXT: vslide1down.vx v8, v8, a5
-; RVA22U64-NEXT: li a1, 255
-; RVA22U64-NEXT: vsetvli zero, zero, e16, m2, ta, ma
-; RVA22U64-NEXT: vmv.s.x v0, a1
-; RVA22U64-NEXT: vsetvli zero, zero, e8, m1, ta, mu
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: lbu a2, 75(a0)
+; RVA22U64-NEXT: slli a3, a3, 32
+; RVA22U64-NEXT: slli a4, a4, 40
+; RVA22U64-NEXT: or a3, a3, a4
+; RVA22U64-NEXT: slli a2, a2, 56
+; RVA22U64-NEXT: lbu a4, 93(a0)
+; RVA22U64-NEXT: or a2, a2, a3
+; RVA22U64-NEXT: or a1, a1, a2
+; RVA22U64-NEXT: lbu a2, 82(a0)
+; RVA22U64-NEXT: slli a4, a4, 8
+; RVA22U64-NEXT: lbu a3, 144(a0)
+; RVA22U64-NEXT: lbu a5, 154(a0)
+; RVA22U64-NEXT: or a2, a2, a4
+; RVA22U64-NEXT: lbu a0, 124(a0)
+; RVA22U64-NEXT: slli a3, a3, 48
+; RVA22U64-NEXT: slli a5, a5, 56
+; RVA22U64-NEXT: or a3, a3, a5
+; RVA22U64-NEXT: slli a0, a0, 32
+; RVA22U64-NEXT: or a0, a0, a3
+; RVA22U64-NEXT: or a0, a0, a2
+; RVA22U64-NEXT: vsetivli zero, 2, e64, m1, ta, ma
+; RVA22U64-NEXT: vmv.v.x v8, a1
; RVA22U64-NEXT: vslide1down.vx v8, v8, a0
-; RVA22U64-NEXT: vslidedown.vi v8, v10, 8, v0.t
; RVA22U64-NEXT: ret
;
; RV64ZVE32-LABEL: buildvec_v16i8_loads_undef_scattered:
@@ -2171,3 +2209,240 @@ define <16 x i8> @buildvec_v16i8_loads_undef_scattered(ptr %p) {
%v16 = insertelement <16 x i8> %v15, i8 %ld16, i32 15
ret <16 x i8> %v16
}
+
+define <8 x i8> @buildvec_v8i8_pack(ptr %p, i8 %e1, i8 %e2, i8 %e3, i8 %e4, i8 %e5, i8 %e6, i8 %e7, i8 %e8) {
+; RV32-LABEL: buildvec_v8i8_pack:
+; RV32: # %bb.0:
+; RV32-NEXT: lbu a0, 0(sp)
+; RV32-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
+; RV32-NEXT: vmv.v.x v8, a1
+; RV32-NEXT: vslide1down.vx v8, v8, a2
+; RV32-NEXT: vslide1down.vx v8, v8, a3
+; RV32-NEXT: vslide1down.vx v9, v8, a4
+; RV32-NEXT: vmv.v.x v8, a5
+; RV32-NEXT: vslide1down.vx v8, v8, a6
+; RV32-NEXT: vslide1down.vx v8, v8, a7
+; RV32-NEXT: vmv.v.i v0, 15
+; RV32-NEXT: vslide1down.vx v8, v8, a0
+; RV32-NEXT: vslidedown.vi v8, v9, 4, v0.t
+; RV32-NEXT: ret
+;
+; RV64V-ONLY-LABEL: buildvec_v8i8_pack:
+; RV64V-ONLY: # %bb.0:
+; RV64V-ONLY-NEXT: lbu a0, 0(sp)
+; RV64V-ONLY-NEXT: vsetivli zero, 8, e8, mf2, ta, mu
+; RV64V-ONLY-NEXT: vmv.v.x v8, a1
+; RV64V-ONLY-NEXT: vslide1down.vx v8, v8, a2
+; RV64V-ONLY-NEXT: vslide1down.vx v8, v8, a3
+; RV64V-ONLY-NEXT: vsli...
[truncated]
Quick note for reviewers: please pay close attention to byte-order issues in this. I frequently get that wrong, and while this is inspired by a downstream patch, I rewrote this basically from scratch. Definitely room for error.
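Since that is the easy thing to get wrong, here is a minimal sanity check of the intended layout, assuming the usual little-endian mapping where lane 0 of the narrow vector ends up in the low bits of the widened element (an illustrative standalone snippet, not part of the patch):

#include <cassert>
#include <cstdint>

int main() {
  // Packing lanes {0x01, 0x02} of an i8 build_vector into one i16 chunk:
  uint8_t Lane0 = 0x01, Lane1 = 0x02;
  uint16_t Packed = uint16_t(Lane0) | (uint16_t(Lane1) << 8);
  assert(Packed == 0x0201); // lane 0 in the low byte, lane 1 above it
  return 0;
}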
  for (unsigned i = 0; i < VT.getVectorNumElements(); i += 2) {
    SDValue A = Op.getOperand(i);
    SDValue B = Op.getOperand(i + 1);
    if (ElemVT != XLenVT) {
If ElemVT == XLenVT then the pack code creates a poison shift. So how can they be equal?
I think you're right, this condition is just dead. Will remove.
@@ -1267,43 +1267,53 @@ define <16 x i8> @buildvec_v16i8_loads_contigous(ptr %p) {
;
; RVA22U64-LABEL: buildvec_v16i8_loads_contigous:
; RVA22U64: # %bb.0:
-; RVA22U64-NEXT: addi a6, a0, 8
-; RVA22U64-NEXT: lbu t6, 1(a0)
+; RVA22U64-NEXT: lbu a1, 1(a0)
Should we test RVA22U32?
Not sure it adds any useful coverage, but happy to add it if desired.
I asked specifically because it allows ELEN=64 with XLen=32. But I guess the BUILD_VECTOR gets type legalized before we'll get here.
We don't appear to have support in tree for rva22u32 (as an mattr alias). I went digging through the profile docs, and I can't find any mention of rva22u32 profile variants; rva22 appears to only define 64-bit versions.
Do you have a particular arch string for a configuration which exercises the case you're concerned about here?
From your other comment, it seems like plain rv32 + v + zba + zbb might be enough here? Can you confirm?
I hadn't realized that the RVA* profiles are 64-bit only. rv32 + v + zba + zbb is enough.
  unsigned NumElts = VT.getVectorNumElements();
  unsigned ElemSizeInBits = ElemVT.getSizeInBits();
  if (ElemSizeInBits >= Subtarget.getELen() || NumElts % 2 != 0)
Do we allow configurations where ELEN > XLEN? I couldn't find a clear answer in the specification as to whether that was a valid point in the configuration cross product.
Zve64* or V on RV32 allows that
Added, and rebased. Turned out to be a good thing, as the configuration exposed a crash (which is now fixed).
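For context, the issue with ELEN > XLEN (e.g. rv32 plus Zve64x or V) is that the doubled chunk may no longer fit in a single scalar register. A guard along these lines rules that case out; this is a sketch of the shape of the fix under that assumption, not necessarily the exact code that landed:

// Hypothetical extra bail-out for ELEN > XLEN configurations: the packed
// chunk is 2 * ElemSizeInBits wide and must still fit in one GPR for the
// shift/or sequence to produce it.
if (ElemSizeInBits * 2 > Subtarget.getXLen())
  return SDValue();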
LGTM with nits
  unsigned NumElts = VT.getVectorNumElements();
  unsigned ElemSizeInBits = ElemVT.getSizeInBits();
  if (ElemSizeInBits >= Subtarget.getELen() || NumElts % 2 != 0)
Do we ever end up with a non-power-of-2 VT during lowering? If not, we could move the NumElts % 2 != 0 check to an assert.
1-element vectors? And @kito-cheng is looking at adding non-power-of-2 MVTs.
I added tests for 1 element vectors, just to be sure we had coverage here.
@topperc Have I addressed all of your concerns? This has an LGTM, but you pointed out a substantial problem after that, so I don't want to land without confirmation.
LGTM
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/51/builds/1021. Here is the relevant piece of the build log for reference: