[RISCV] Call SimplifyDemandedBits on the scalar input of vmv_s_x_vl #131711
Conversation
@llvm/pr-subscribers-backend-risc-v

Author: ming (yanming123456)

Changes

The vmv.s.x instruction copies the scalar integer register to element 0 of the destination vector register. If SEW < XLEN, the least-significant bits are copied and the upper XLEN-SEW bits are ignored. If the number of trailing one bits in an AND mask feeding the scalar is at least SEW, the AND cannot change the bits that are copied, so it can be stripped.

Patch is 20.58 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/131711.diff

3 Files Affected:
diff --git a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
index 27a4bbce1f5fc..9f6ab3fbfc9b2 100644
--- a/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelLowering.cpp
@@ -19183,6 +19183,19 @@ SDValue RISCVTargetLowering::PerformDAGCombine(SDNode *N,
SDValue Scalar = N->getOperand(1);
SDValue VL = N->getOperand(2);
+ // The vmv.s.x instruction copies the scalar integer register to element 0
+ // of the destination vector register. If SEW < XLEN, the least-significant
+ // bits are copied and the upper XLEN-SEW bits are ignored.
+ //
+  // Strip an AND node whose mask keeps at least the low SEW bits.
+ if (Scalar.getOpcode() == ISD::AND &&
+ isa<ConstantSDNode>(Scalar->getOperand(1))) {
+ if (Scalar.getConstantOperandAPInt(1).countr_one() >=
+ VT.getScalarSizeInBits())
+ return DAG.getNode(RISCVISD::VMV_S_X_VL, DL, VT, Passthru,
+ Scalar.getOperand(0), VL);
+ }
+
if (Scalar.getOpcode() == RISCVISD::VMV_X_S && Passthru.isUndef() &&
Scalar.getOperand(0).getValueType() == N->getValueType(0))
return Scalar.getOperand(0);
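
As a quick sanity check of the countr_one condition in the hunk above, here is a self-contained C++20 snippet (an illustration only, not part of the patch; the names and values are made up) showing that a mask with at least SEW trailing ones cannot change the low SEW bits that vmv.s.x copies:

```cpp
#include <bit>
#include <cassert>
#include <cstdint>

int main() {
  constexpr int SEW = 8;      // element width used by the e8 tests below
  const uint64_t Mask = 255;  // the "andi a0, a0, 255" being removed
  const uint64_t X = 0x1234;  // arbitrary scalar value

  // countr_one(255) == 8 >= SEW: the condition checked in the combine.
  assert(std::countr_one(Mask) >= SEW);

  // Therefore AND-ing with the mask leaves the low SEW bits unchanged,
  // which is all that vmv.s.x reads when SEW < XLEN.
  const uint64_t LowBits = (uint64_t(1) << SEW) - 1;
  assert(((X & Mask) & LowBits) == (X & LowBits));
  return 0;
}
```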
diff --git a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int-vp.ll
index f920e39e7d295..52dd87068b0c8 100644
--- a/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/fixed-vectors-reduction-int-vp.ll
@@ -24,7 +24,6 @@ declare i8 @llvm.vp.reduce.umax.v2i8(i8, <2 x i8>, <2 x i1>, i32)
define signext i8 @vpreduce_umax_v2i8(i8 signext %s, <2 x i8> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v2i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -55,7 +54,6 @@ declare i8 @llvm.vp.reduce.umin.v2i8(i8, <2 x i8>, <2 x i1>, i32)
define signext i8 @vpreduce_umin_v2i8(i8 signext %s, <2 x i8> %v, <2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umin_v2i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -131,7 +129,6 @@ declare i8 @llvm.vp.reduce.umin.v3i8(i8, <3 x i8>, <3 x i1>, i32)
define signext i8 @vpreduce_umin_v3i8(i8 signext %s, <3 x i8> %v, <3 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umin_v3i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -162,7 +159,6 @@ declare i8 @llvm.vp.reduce.umax.v4i8(i8, <4 x i8>, <4 x i1>, i32)
define signext i8 @vpreduce_umax_v4i8(i8 signext %s, <4 x i8> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_v4i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -193,7 +189,6 @@ declare i8 @llvm.vp.reduce.umin.v4i8(i8, <4 x i8>, <4 x i1>, i32)
define signext i8 @vpreduce_umin_v4i8(i8 signext %s, <4 x i8> %v, <4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umin_v4i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -282,27 +277,14 @@ define signext i16 @vpreduce_add_v2i16(i16 signext %s, <2 x i16> %v, <2 x i1> %m
declare i16 @llvm.vp.reduce.umax.v2i16(i16, <2 x i16>, <2 x i1>, i32)
define signext i16 @vpreduce_umax_v2i16(i16 signext %s, <2 x i16> %v, <2 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umax_v2i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV32-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umax_v2i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV64-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umax_v2i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vredmaxu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umax.v2i16(i16 %s, <2 x i16> %v, <2 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -325,27 +307,14 @@ define signext i16 @vpreduce_smax_v2i16(i16 signext %s, <2 x i16> %v, <2 x i1> %
declare i16 @llvm.vp.reduce.umin.v2i16(i16, <2 x i16>, <2 x i1>, i32)
define signext i16 @vpreduce_umin_v2i16(i16 signext %s, <2 x i16> %v, <2 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umin_v2i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV32-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umin_v2i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV64-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umin_v2i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vredminu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umin.v2i16(i16 %s, <2 x i16> %v, <2 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -428,27 +397,14 @@ define signext i16 @vpreduce_add_v4i16(i16 signext %s, <4 x i16> %v, <4 x i1> %m
declare i16 @llvm.vp.reduce.umax.v4i16(i16, <4 x i16>, <4 x i1>, i32)
define signext i16 @vpreduce_umax_v4i16(i16 signext %s, <4 x i16> %v, <4 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umax_v4i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV32-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umax_v4i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV64-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umax_v4i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vredmaxu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umax.v4i16(i16 %s, <4 x i16> %v, <4 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -471,27 +427,14 @@ define signext i16 @vpreduce_smax_v4i16(i16 signext %s, <4 x i16> %v, <4 x i1> %
declare i16 @llvm.vp.reduce.umin.v4i16(i16, <4 x i16>, <4 x i1>, i32)
define signext i16 @vpreduce_umin_v4i16(i16 signext %s, <4 x i16> %v, <4 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umin_v4i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV32-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umin_v4i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV64-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umin_v4i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vredminu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umin.v4i16(i16 %s, <4 x i16> %v, <4 x i1> %m, i32 %evl)
ret i16 %r
}
diff --git a/llvm/test/CodeGen/RISCV/rvv/vreductions-int-vp.ll b/llvm/test/CodeGen/RISCV/rvv/vreductions-int-vp.ll
index eacfce098bddb..7c6782fc1dcd4 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vreductions-int-vp.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vreductions-int-vp.ll
@@ -24,7 +24,6 @@ declare i8 @llvm.vp.reduce.umax.nxv1i8(i8, <vscale x 1 x i8>, <vscale x 1 x i1>,
define signext i8 @vpreduce_umax_nxv1i8(i8 signext %s, <vscale x 1 x i8> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv1i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -55,7 +54,6 @@ declare i8 @llvm.vp.reduce.umin.nxv1i8(i8, <vscale x 1 x i8>, <vscale x 1 x i1>,
define signext i8 @vpreduce_umin_nxv1i8(i8 signext %s, <vscale x 1 x i8> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umin_nxv1i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf8, ta, ma
@@ -146,7 +144,6 @@ declare i8 @llvm.vp.reduce.umax.nxv2i8(i8, <vscale x 2 x i8>, <vscale x 2 x i1>,
define signext i8 @vpreduce_umax_nxv2i8(i8 signext %s, <vscale x 2 x i8> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv2i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -177,7 +174,6 @@ declare i8 @llvm.vp.reduce.umin.nxv2i8(i8, <vscale x 2 x i8>, <vscale x 2 x i1>,
define signext i8 @vpreduce_umin_nxv2i8(i8 signext %s, <vscale x 2 x i8> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umin_nxv2i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf4, ta, ma
@@ -283,7 +279,6 @@ declare i8 @llvm.vp.reduce.umax.nxv4i8(i8, <vscale x 4 x i8>, <vscale x 4 x i1>,
define signext i8 @vpreduce_umax_nxv4i8(i8 signext %s, <vscale x 4 x i8> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umax_nxv4i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -314,7 +309,6 @@ declare i8 @llvm.vp.reduce.umin.nxv4i8(i8, <vscale x 4 x i8>, <vscale x 4 x i1>,
define signext i8 @vpreduce_umin_nxv4i8(i8 signext %s, <vscale x 4 x i8> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
; CHECK-LABEL: vpreduce_umin_nxv4i8:
; CHECK: # %bb.0:
-; CHECK-NEXT: andi a0, a0, 255
; CHECK-NEXT: vsetivli zero, 1, e8, m1, ta, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetvli zero, a1, e8, mf2, ta, ma
@@ -403,27 +397,14 @@ define signext i16 @vpreduce_add_nxv1i16(i16 signext %s, <vscale x 1 x i16> %v,
declare i16 @llvm.vp.reduce.umax.nxv1i16(i16, <vscale x 1 x i16>, <vscale x 1 x i1>, i32)
define signext i16 @vpreduce_umax_nxv1i16(i16 signext %s, <vscale x 1 x i16> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umax_nxv1i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV32-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umax_nxv1i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV64-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umax_nxv1i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vredmaxu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umax.nxv1i16(i16 %s, <vscale x 1 x i16> %v, <vscale x 1 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -446,27 +427,14 @@ define signext i16 @vpreduce_smax_nxv1i16(i16 signext %s, <vscale x 1 x i16> %v,
declare i16 @llvm.vp.reduce.umin.nxv1i16(i16, <vscale x 1 x i16>, <vscale x 1 x i1>, i32)
define signext i16 @vpreduce_umin_nxv1i16(i16 signext %s, <vscale x 1 x i16> %v, <vscale x 1 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umin_nxv1i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV32-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umin_nxv1i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
-; RV64-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umin_nxv1i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf4, ta, ma
+; CHECK-NEXT: vredminu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umin.nxv1i16(i16 %s, <vscale x 1 x i16> %v, <vscale x 1 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -549,27 +517,14 @@ define signext i16 @vpreduce_add_nxv2i16(i16 signext %s, <vscale x 2 x i16> %v,
declare i16 @llvm.vp.reduce.umax.nxv2i16(i16, <vscale x 2 x i16>, <vscale x 2 x i1>, i32)
define signext i16 @vpreduce_umax_nxv2i16(i16 signext %s, <vscale x 2 x i16> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umax_nxv2i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV32-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umax_nxv2i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV64-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umax_nxv2i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vredmaxu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umax.nxv2i16(i16 %s, <vscale x 2 x i16> %v, <vscale x 2 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -592,27 +547,14 @@ define signext i16 @vpreduce_smax_nxv2i16(i16 signext %s, <vscale x 2 x i16> %v,
declare i16 @llvm.vp.reduce.umin.nxv2i16(i16, <vscale x 2 x i16>, <vscale x 2 x i1>, i32)
define signext i16 @vpreduce_umin_nxv2i16(i16 signext %s, <vscale x 2 x i16> %v, <vscale x 2 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umin_nxv2i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV32-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umin_nxv2i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
-; RV64-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umin_nxv2i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, mf2, ta, ma
+; CHECK-NEXT: vredminu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umin.nxv2i16(i16 %s, <vscale x 2 x i16> %v, <vscale x 2 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -695,27 +637,14 @@ define signext i16 @vpreduce_add_nxv4i16(i16 signext %s, <vscale x 4 x i16> %v,
declare i16 @llvm.vp.reduce.umax.nxv4i16(i16, <vscale x 4 x i16>, <vscale x 4 x i1>, i32)
define signext i16 @vpreduce_umax_nxv4i16(i16 signext %s, <vscale x 4 x i16> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umax_nxv4i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, m1, ta, ma
-; RV32-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umax_nxv4i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv.s.x v9, a0
-; RV64-NEXT: vsetvli zero, a1, e16, m1, ta, ma
-; RV64-NEXT: vredmaxu.vs v9, v8, v9, v0.t
-; RV64-NEXT: vmv.x.s a0, v9
-; RV64-NEXT: ret
+; CHECK-LABEL: vpreduce_umax_nxv4i16:
+; CHECK: # %bb.0:
+; CHECK-NEXT: vsetivli zero, 1, e16, m1, ta, ma
+; CHECK-NEXT: vmv.s.x v9, a0
+; CHECK-NEXT: vsetvli zero, a1, e16, m1, ta, ma
+; CHECK-NEXT: vredmaxu.vs v9, v8, v9, v0.t
+; CHECK-NEXT: vmv.x.s a0, v9
+; CHECK-NEXT: ret
%r = call i16 @llvm.vp.reduce.umax.nxv4i16(i16 %s, <vscale x 4 x i16> %v, <vscale x 4 x i1> %m, i32 %evl)
ret i16 %r
}
@@ -738,27 +667,14 @@ define signext i16 @vpreduce_smax_nxv4i16(i16 signext %s, <vscale x 4 x i16> %v,
declare i16 @llvm.vp.reduce.umin.nxv4i16(i16, <vscale x 4 x i16>, <vscale x 4 x i1>, i32)
define signext i16 @vpreduce_umin_nxv4i16(i16 signext %s, <vscale x 4 x i16> %v, <vscale x 4 x i1> %m, i32 zeroext %evl) {
-; RV32-LABEL: vpreduce_umin_nxv4i16:
-; RV32: # %bb.0:
-; RV32-NEXT: slli a0, a0, 16
-; RV32-NEXT: srli a0, a0, 16
-; RV32-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV32-NEXT: vmv.s.x v9, a0
-; RV32-NEXT: vsetvli zero, a1, e16, m1, ta, ma
-; RV32-NEXT: vredminu.vs v9, v8, v9, v0.t
-; RV32-NEXT: vmv.x.s a0, v9
-; RV32-NEXT: ret
-;
-; RV64-LABEL: vpreduce_umin_nxv4i16:
-; RV64: # %bb.0:
-; RV64-NEXT: slli a0, a0, 48
-; RV64-NEXT: srli a0, a0, 48
-; RV64-NEXT: vsetivli zero, 1, e16, m1, ta, ma
-; RV64-NEXT: vmv...
[truncated]
We should call SimplifyDemandedBits on the vmv_s_x_vl input rather than checking specifically for ISD::AND. There is similar code elsewhere in this file that does this.
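
For reference, one way that suggestion could look (a hedged sketch only, assuming the TargetLowering::SimplifyDemandedBits overload that takes a DAGCombinerInfo; variable names follow the hunk above, and this is not the merged patch):

```cpp
// Sketch inside the RISCVISD::VMV_S_X_VL combine: vmv.s.x only demands
// the low SEW bits of the scalar, so ask SimplifyDemandedBits to strip
// any masking above them. This subsumes the explicit ISD::AND check.
unsigned EltBits = VT.getScalarSizeInBits();
unsigned ScalarBits = Scalar.getValueSizeInBits();
if (ScalarBits > EltBits) {
  APInt DemandedLowBits = APInt::getLowBitsSet(ScalarBits, EltBits);
  if (SimplifyDemandedBits(Scalar, DemandedLowBits, DCI))
    return SDValue(N, 0); // Node was updated in place by the combiner.
}
```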
Do we need to care whether the merge operand is undef?
I don't think so. I don't know why that's in there.
LGTM
Should update the PR title.
Force-pushed 366c883 to 08f8e07.
Maybe "[RISCV] Call SimplifyDemandedBits on the scalar input of vmv_x_s_vl" |
The vmv.s.x instruction copies the scalar integer register to element 0 of the destination vector register. If SEW < XLEN, the least-significant bits are copied and the upper XLEN-SEW bits are ignored.
I accidentally wrote vmv_x_s_vl instead of vmv_s_x_vl. I have fixed the PR title.
LLVM Buildbot has detected a new failure on one of the builders. Full details are available at: https://lab.llvm.org/buildbot/#/builders/81/builds/5386