
[RISCV] Move RISCVInsertVSETVLI to after phi elimination #91440

Merged: 3 commits, May 15, 2024
262 changes: 166 additions & 96 deletions llvm/lib/Target/RISCV/RISCVInsertVSETVLI.cpp

Large diffs are not rendered by default.

9 changes: 8 additions & 1 deletion llvm/lib/Target/RISCV/RISCVTargetMachine.cpp
@@ -541,9 +541,16 @@ void RISCVPassConfig::addPreRegAlloc() {
addPass(createRISCVPreRAExpandPseudoPass());
if (TM->getOptLevel() != CodeGenOptLevel::None)
addPass(createRISCVMergeBaseOffsetOptPass());

addPass(createRISCVInsertReadWriteCSRPass());
addPass(createRISCVInsertWriteVXRMPass());
addPass(createRISCVInsertVSETVLIPass());

// Run RISCVInsertVSETVLI after PHI elimination. On O1 and above do it after
// register coalescing so needVSETVLIPHI doesn't need to look through COPYs.
if (TM->getOptLevel() == CodeGenOptLevel::None)
insertPass(&PHIEliminationID, createRISCVInsertVSETVLIPass());
else
insertPass(&RegisterCoalescerID, createRISCVInsertVSETVLIPass());
}

void RISCVPassConfig::addFastRegAlloc() {
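
For readers unfamiliar with how a target pins passes into the generic codegen pipeline, below is a minimal, self-contained sketch of the ordering decision made in the change above. It is a toy model, not LLVM code: the pass names are plain strings, and insertAfter() only mimics the effect of TargetPassConfig::insertPass(), which schedules the inserted pass to run immediately after the anchor pass.

// Toy model only -- not LLVM code. Pass names are illustrative strings.
#include <algorithm>
#include <iostream>
#include <iterator>
#include <string>
#include <vector>

// insertAfter() mimics TargetPassConfig::insertPass(): the inserted pass is
// scheduled immediately after the anchor pass, wherever the anchor ends up.
struct Pipeline {
  std::vector<std::string> passes;
  void add(std::string p) { passes.push_back(std::move(p)); }
  void insertAfter(const std::string &anchor, std::string p) {
    auto it = std::find(passes.begin(), passes.end(), anchor);
    if (it != passes.end())
      passes.insert(std::next(it), std::move(p));
  }
};

int main() {
  for (bool optimized : {false, true}) {
    Pipeline pipe;
    pipe.add("phi-elimination");
    pipe.add("two-address");
    if (optimized)
      pipe.add("register-coalescing");
    pipe.add("regalloc");
    // The change above: anchor the vsetvli insertion on phi elimination at
    // O0, and on register coalescing at O1 and above.
    pipe.insertAfter(optimized ? "register-coalescing" : "phi-elimination",
                     "riscv-insert-vsetvli");
    std::cout << (optimized ? "O1+:" : "O0: ");
    for (const std::string &p : pipe.passes)
      std::cout << ' ' << p;
    std::cout << '\n';
  }
  return 0;
}

Running the toy prints the two resulting orders: at O0 the vsetvli step lands right after phi elimination (and before the two-address pass), while at O1 and above it lands after register coalescing, matching the pipeline test updates below.
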
6 changes: 4 additions & 2 deletions llvm/test/CodeGen/RISCV/O0-pipeline.ll
@@ -42,12 +42,14 @@
; CHECK-NEXT: RISC-V Pre-RA pseudo instruction expansion pass
; CHECK-NEXT: RISC-V Insert Read/Write CSR Pass
; CHECK-NEXT: RISC-V Insert Write VXRM Pass
; CHECK-NEXT: RISC-V Insert VSETVLI pass
; CHECK-NEXT: Init Undef Pass
; CHECK-NEXT: Eliminate PHI nodes for register allocation
; CHECK-NEXT: MachineDominator Tree Construction
; CHECK-NEXT: Slot index numbering
Collaborator:

Minor, but it looks like either the two-address pass or the fast register allocator isn't preserving an analysis that it probably could.

Contributor Author:

Hopefully this is just temporary and will go away once we fully move the pass to after the Fast Register Allocator and before the RISC-V Coalesce VSETVLI pass, where it can reuse the live interval analysis computed there.
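
A rough sketch of the mechanism being discussed, assuming the usual legacy pass manager idiom; the class name and the exact analysis set are illustrative, not the actual contents of RISCVInsertVSETVLI.cpp. A machine pass that declares Live Intervals as required and preserved can share a single "Live Interval Analysis" run with neighbouring passes instead of triggering the extra recomputation visible in the O0 pipeline above.

// Illustrative sketch only: hypothetical class name, assumed analysis set.
#include "llvm/CodeGen/LiveIntervals.h"
#include "llvm/CodeGen/MachineFunctionPass.h"
#include "llvm/CodeGen/SlotIndexes.h"
using namespace llvm;

namespace {
class InsertVSETVLISketch : public MachineFunctionPass {
public:
  static char ID;
  InsertVSETVLISketch() : MachineFunctionPass(ID) {}

  void getAnalysisUsage(AnalysisUsage &AU) const override {
    AU.setPreservesCFG();
    // Requiring LiveIntervals lets the pass reuse an already-scheduled
    // "Live Interval Analysis"; preserving it (and SlotIndexes) lets later
    // consumers, such as the coalesce-vsetvli pass, reuse it in turn.
    AU.addRequired<LiveIntervals>();
    AU.addPreserved<LiveIntervals>();
    AU.addPreserved<SlotIndexes>();
    MachineFunctionPass::getAnalysisUsage(AU);
  }

  bool runOnMachineFunction(MachineFunction &MF) override {
    return false; // placeholder body; the real pass rewrites VL/VTYPE state
  }
};
} // end anonymous namespace

char InsertVSETVLISketch::ID = 0;
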

; CHECK-NEXT: Live Interval Analysis
; CHECK-NEXT: RISC-V Insert VSETVLI pass
; CHECK-NEXT: Two-Address instruction pass
; CHECK-NEXT: Fast Register Allocator
; CHECK-NEXT: MachineDominator Tree Construction
; CHECK-NEXT: Slot index numbering
; CHECK-NEXT: Live Interval Analysis
; CHECK-NEXT: RISC-V Coalesce VSETVLI pass
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/RISCV/O3-pipeline.ll
@@ -117,7 +117,6 @@
; CHECK-NEXT: RISC-V Merge Base Offset
; CHECK-NEXT: RISC-V Insert Read/Write CSR Pass
; CHECK-NEXT: RISC-V Insert Write VXRM Pass
; CHECK-NEXT: RISC-V Insert VSETVLI pass
; CHECK-NEXT: Detect Dead Lanes
; CHECK-NEXT: Init Undef Pass
; CHECK-NEXT: Process Implicit Definitions
@@ -129,6 +128,7 @@
; CHECK-NEXT: Slot index numbering
; CHECK-NEXT: Live Interval Analysis
; CHECK-NEXT: Register Coalescer
; CHECK-NEXT: RISC-V Insert VSETVLI pass
; CHECK-NEXT: Rename Disconnected Subregister Components
; CHECK-NEXT: Machine Instruction Scheduler
; CHECK-NEXT: Machine Block Frequency Analysis
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/RISCV/rvv/combine-vmv.ll
@@ -36,8 +36,8 @@ define <vscale x 4 x i32> @vadd_undef(<vscale x 4 x i32> %a, <vscale x 4 x i32>
define <vscale x 4 x i32> @vadd_same_passthru(<vscale x 4 x i32> %passthru, <vscale x 4 x i32> %a, <vscale x 4 x i32> %b, iXLen %vl1, iXLen %vl2) {
; CHECK-LABEL: vadd_same_passthru:
; CHECK: # %bb.0:
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
; CHECK-NEXT: vmv2r.v v14, v8
; CHECK-NEXT: vsetvli zero, a0, e32, m2, tu, ma
; CHECK-NEXT: vadd.vv v14, v10, v12
; CHECK-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; CHECK-NEXT: vmv.v.v v8, v14
@@ -149,8 +149,8 @@ define void @constant_zero_stride(ptr %s, ptr %d) {
; CHECK: # %bb.0:
; CHECK-NEXT: vsetivli zero, 2, e8, mf8, ta, ma
; CHECK-NEXT: vle8.v v8, (a0)
; CHECK-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetivli zero, 4, e8, mf4, ta, ma
; CHECK-NEXT: vslideup.vi v9, v8, 2
; CHECK-NEXT: vse8.v v9, (a1)
; CHECK-NEXT: ret
8 changes: 4 additions & 4 deletions llvm/test/CodeGen/RISCV/rvv/dont-sink-splat-operands.ll
@@ -141,9 +141,9 @@ define void @sink_splat_add_scalable(ptr nocapture %a, i32 signext %x) {
; SINK-NEXT: andi a4, a3, 1024
; SINK-NEXT: xori a3, a4, 1024
; SINK-NEXT: slli a5, a5, 1
; SINK-NEXT: vsetvli a6, zero, e32, m2, ta, ma
; SINK-NEXT: mv a6, a0
; SINK-NEXT: mv a7, a3
; SINK-NEXT: vsetvli t0, zero, e32, m2, ta, ma
; SINK-NEXT: .LBB1_3: # %vector.body
; SINK-NEXT: # =>This Inner Loop Header: Depth=1
; SINK-NEXT: vl2re32.v v8, (a6)
@@ -183,9 +183,9 @@ define void @sink_splat_add_scalable(ptr nocapture %a, i32 signext %x) {
; DEFAULT-NEXT: andi a4, a3, 1024
; DEFAULT-NEXT: xori a3, a4, 1024
; DEFAULT-NEXT: slli a5, a5, 1
; DEFAULT-NEXT: vsetvli a6, zero, e32, m2, ta, ma
; DEFAULT-NEXT: mv a6, a0
; DEFAULT-NEXT: mv a7, a3
; DEFAULT-NEXT: vsetvli t0, zero, e32, m2, ta, ma
; DEFAULT-NEXT: .LBB1_3: # %vector.body
; DEFAULT-NEXT: # =>This Inner Loop Header: Depth=1
; DEFAULT-NEXT: vl2re32.v v8, (a6)
@@ -459,9 +459,9 @@ define void @sink_splat_fadd_scalable(ptr nocapture %a, float %x) {
; SINK-NEXT: addi a3, a2, -1
; SINK-NEXT: andi a4, a3, 1024
; SINK-NEXT: xori a3, a4, 1024
; SINK-NEXT: vsetvli a5, zero, e32, m1, ta, ma
; SINK-NEXT: mv a5, a0
; SINK-NEXT: mv a6, a3
; SINK-NEXT: vsetvli a7, zero, e32, m1, ta, ma
; SINK-NEXT: .LBB4_3: # %vector.body
; SINK-NEXT: # =>This Inner Loop Header: Depth=1
; SINK-NEXT: vl1re32.v v8, (a5)
@@ -500,9 +500,9 @@ define void @sink_splat_fadd_scalable(ptr nocapture %a, float %x) {
; DEFAULT-NEXT: addi a3, a2, -1
; DEFAULT-NEXT: andi a4, a3, 1024
; DEFAULT-NEXT: xori a3, a4, 1024
; DEFAULT-NEXT: vsetvli a5, zero, e32, m1, ta, ma
; DEFAULT-NEXT: mv a5, a0
; DEFAULT-NEXT: mv a6, a3
; DEFAULT-NEXT: vsetvli a7, zero, e32, m1, ta, ma
; DEFAULT-NEXT: .LBB4_3: # %vector.body
; DEFAULT-NEXT: # =>This Inner Loop Header: Depth=1
; DEFAULT-NEXT: vl1re32.v v8, (a5)
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/RISCV/rvv/fixed-vectors-int.ll
@@ -1155,8 +1155,8 @@ define void @mulhu_v8i16(ptr %x) {
; CHECK-NEXT: vle16.v v8, (a0)
; CHECK-NEXT: vmv.v.i v9, 0
; CHECK-NEXT: lui a1, 1048568
; CHECK-NEXT: vsetvli zero, zero, e16, m1, tu, ma
; CHECK-NEXT: vmv.v.i v10, 0
; CHECK-NEXT: vsetvli zero, zero, e16, m1, tu, ma
; CHECK-NEXT: vmv.s.x v10, a1
; CHECK-NEXT: vsetvli zero, zero, e16, m1, ta, ma
; CHECK-NEXT: vmv.v.i v11, 1
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/RISCV/rvv/fixed-vectors-masked-gather.ll
@@ -12092,8 +12092,8 @@ define <32 x i8> @mgather_baseidx_v32i8(ptr %base, <32 x i8> %idxs, <32 x i1> %m
; RV64V: # %bb.0:
; RV64V-NEXT: vsetivli zero, 16, e64, m8, ta, ma
; RV64V-NEXT: vsext.vf8 v16, v8
; RV64V-NEXT: vsetvli zero, zero, e8, m1, ta, mu
; RV64V-NEXT: vmv1r.v v12, v10
; RV64V-NEXT: vsetvli zero, zero, e8, m1, ta, mu
; RV64V-NEXT: vluxei64.v v12, (a0), v16, v0.t
; RV64V-NEXT: vsetivli zero, 16, e8, m2, ta, ma
; RV64V-NEXT: vslidedown.vi v10, v10, 16
@@ -369,8 +369,8 @@ define <4 x i8> @vslide1up_4xi8_neg_incorrect_insert3(<4 x i8> %v, i8 %b) {
define <2 x i8> @vslide1up_4xi8_neg_length_changing(<4 x i8> %v, i8 %b) {
; CHECK-LABEL: vslide1up_4xi8_neg_length_changing:
; CHECK: # %bb.0:
; CHECK-NEXT: vsetivli zero, 4, e8, m1, tu, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetivli zero, 4, e8, m1, tu, ma
; CHECK-NEXT: vmv.s.x v9, a0
; CHECK-NEXT: vsetivli zero, 2, e8, mf8, ta, ma
; CHECK-NEXT: vslideup.vi v9, v8, 1
@@ -168,8 +168,8 @@ define void @strided_constant_0(ptr %x, ptr %z) {
; CHECK: # %bb.0:
; CHECK-NEXT: vsetivli zero, 4, e16, mf2, ta, ma
; CHECK-NEXT: vle16.v v8, (a0)
; CHECK-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetivli zero, 8, e16, m1, ta, ma
; CHECK-NEXT: vslideup.vi v9, v8, 4
; CHECK-NEXT: vse16.v v9, (a1)
; CHECK-NEXT: ret
@@ -62,8 +62,8 @@ define void @gather_masked(ptr noalias nocapture %A, ptr noalias nocapture reado
; CHECK-NEXT: li a4, 5
; CHECK-NEXT: .LBB1_1: # %vector.body
; CHECK-NEXT: # =>This Inner Loop Header: Depth=1
; CHECK-NEXT: vsetvli zero, a3, e8, m1, ta, mu
; CHECK-NEXT: vmv1r.v v9, v8
; CHECK-NEXT: vsetvli zero, a3, e8, m1, ta, mu
; CHECK-NEXT: vlse8.v v9, (a1), a4, v0.t
; CHECK-NEXT: vle8.v v10, (a0)
; CHECK-NEXT: vadd.vv v9, v10, v9
2 changes: 1 addition & 1 deletion llvm/test/CodeGen/RISCV/rvv/fixed-vectors-trunc-vp.ll
@@ -394,14 +394,14 @@ define <128 x i32> @vtrunc_v128i32_v128i64(<128 x i64> %a, <128 x i1> %m, i32 ze
; CHECK-NEXT: # %bb.11:
; CHECK-NEXT: li a1, 32
; CHECK-NEXT: .LBB16_12:
; CHECK-NEXT: vsetvli zero, a3, e32, m8, ta, ma
; CHECK-NEXT: csrr a4, vlenb
; CHECK-NEXT: li a5, 24
; CHECK-NEXT: mul a4, a4, a5
; CHECK-NEXT: add a4, sp, a4
; CHECK-NEXT: addi a4, a4, 16
; CHECK-NEXT: vl8r.v v8, (a4) # Unknown-size Folded Reload
; CHECK-NEXT: vmv4r.v v24, v8
; CHECK-NEXT: vsetvli zero, a3, e32, m8, ta, ma
; CHECK-NEXT: csrr a4, vlenb
; CHECK-NEXT: li a5, 56
; CHECK-NEXT: mul a4, a4, a5
4 changes: 2 additions & 2 deletions llvm/test/CodeGen/RISCV/rvv/fold-scalar-load-crash.ll
@@ -15,8 +15,8 @@ define i32 @test(i32 %size, ptr %add.ptr, i64 %const) {
; RV32-NEXT: .LBB0_1: # %for.body
; RV32-NEXT: # =>This Inner Loop Header: Depth=1
; RV32-NEXT: vmv.s.x v9, zero
; RV32-NEXT: vsetvli zero, a1, e8, mf2, tu, ma
; RV32-NEXT: vmv1r.v v10, v8
; RV32-NEXT: vsetvli zero, a1, e8, mf2, tu, ma
; RV32-NEXT: vslideup.vx v10, v9, a2
; RV32-NEXT: vsetivli zero, 8, e8, mf2, tu, ma
; RV32-NEXT: vmv.s.x v10, a0
@@ -40,8 +40,8 @@ define i32 @test(i32 %size, ptr %add.ptr, i64 %const) {
; RV64-NEXT: .LBB0_1: # %for.body
; RV64-NEXT: # =>This Inner Loop Header: Depth=1
; RV64-NEXT: vmv.s.x v9, zero
; RV64-NEXT: vsetvli zero, a1, e8, mf2, tu, ma
; RV64-NEXT: vmv1r.v v10, v8
; RV64-NEXT: vsetvli zero, a1, e8, mf2, tu, ma
; RV64-NEXT: vslideup.vx v10, v9, a2
; RV64-NEXT: vsetivli zero, 8, e8, mf2, tu, ma
; RV64-NEXT: vmv.s.x v10, a0
12 changes: 6 additions & 6 deletions llvm/test/CodeGen/RISCV/rvv/fpclamptosat_vec.ll
@@ -479,11 +479,11 @@ define <4 x i32> @stest_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: addi a0, sp, 16
; CHECK-V-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vslideup.vi v10, v8, 1
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: csrr a0, vlenb
; CHECK-V-NEXT: add a0, sp, a0
; CHECK-V-NEXT: addi a0, a0, 16
; CHECK-V-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclip.wi v8, v10, 0
@@ -640,11 +640,11 @@ define <4 x i32> @utesth_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: addi a0, sp, 16
; CHECK-V-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vslideup.vi v10, v8, 1
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: csrr a0, vlenb
; CHECK-V-NEXT: add a0, sp, a0
; CHECK-V-NEXT: addi a0, a0, 16
; CHECK-V-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
@@ -811,11 +811,11 @@ define <4 x i32> @ustest_f16i32(<4 x half> %x) {
; CHECK-V-NEXT: addi a0, sp, 16
; CHECK-V-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: csrr a0, vlenb
Collaborator:

This diff doesn't matter, but it is another instance where moving the vsetvli maximally backward would remove a test diff.

; CHECK-V-NEXT: add a0, sp, a0
; CHECK-V-NEXT: addi a0, a0, 16
; CHECK-V-NEXT: vl2r.v v10, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: vslideup.vi v8, v10, 2
; CHECK-V-NEXT: li a0, -1
; CHECK-V-NEXT: srli a0, a0, 32
@@ -3850,11 +3850,11 @@ define <4 x i32> @stest_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: addi a0, sp, 16
; CHECK-V-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vslideup.vi v10, v8, 1
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: csrr a0, vlenb
; CHECK-V-NEXT: add a0, sp, a0
; CHECK-V-NEXT: addi a0, a0, 16
; CHECK-V-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclip.wi v8, v10, 0
@@ -4009,11 +4009,11 @@ define <4 x i32> @utesth_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: addi a0, sp, 16
; CHECK-V-NEXT: vl1r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vslideup.vi v10, v8, 1
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: csrr a0, vlenb
; CHECK-V-NEXT: add a0, sp, a0
; CHECK-V-NEXT: addi a0, a0, 16
; CHECK-V-NEXT: vl2r.v v8, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: vslideup.vi v10, v8, 2
; CHECK-V-NEXT: vsetvli zero, zero, e32, m1, ta, ma
; CHECK-V-NEXT: vnclipu.wi v8, v10, 0
@@ -4179,11 +4179,11 @@ define <4 x i32> @ustest_f16i32_mm(<4 x half> %x) {
; CHECK-V-NEXT: addi a0, sp, 16
; CHECK-V-NEXT: vl1r.v v9, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vslideup.vi v8, v9, 1
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: csrr a0, vlenb
; CHECK-V-NEXT: add a0, sp, a0
; CHECK-V-NEXT: addi a0, a0, 16
; CHECK-V-NEXT: vl2r.v v10, (a0) # Unknown-size Folded Reload
; CHECK-V-NEXT: vsetivli zero, 4, e64, m2, ta, ma
; CHECK-V-NEXT: vslideup.vi v8, v10, 2
; CHECK-V-NEXT: li a0, -1
; CHECK-V-NEXT: srli a0, a0, 32
4 changes: 2 additions & 2 deletions llvm/test/CodeGen/RISCV/rvv/rv32-spill-vector-csr.ll
@@ -21,8 +21,8 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: add a1, sp, a1
; SPILL-O0-NEXT: addi a1, a1, 16
; SPILL-O0-NEXT: vs1r.v v9, (a1) # Unknown-size Folded Spill
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: vfadd.vv v8, v9, v10
; SPILL-O0-NEXT: addi a0, sp, 16
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -37,8 +37,8 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: vl1r.v v9, (a1) # Unknown-size Folded Reload
; SPILL-O0-NEXT: # kill: def $x11 killed $x10
; SPILL-O0-NEXT: lw a0, 8(sp) # 4-byte Folded Reload
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: vfadd.vv v8, v9, v10
; SPILL-O0-NEXT: csrr a0, vlenb
; SPILL-O0-NEXT: slli a0, a0, 1
Expand Down
10 changes: 5 additions & 5 deletions llvm/test/CodeGen/RISCV/rvv/rv32-spill-zvlsseg.ll
@@ -13,8 +13,8 @@ define <vscale x 1 x i32> @spill_zvlsseg_nxv1i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: csrr a2, vlenb
; SPILL-O0-NEXT: slli a2, a2, 1
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, mf2, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
; SPILL-O0-NEXT: vmv1r.v v8, v9
; SPILL-O0-NEXT: addi a0, sp, 16
@@ -90,8 +90,8 @@ define <vscale x 2 x i32> @spill_zvlsseg_nxv2i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: csrr a2, vlenb
; SPILL-O0-NEXT: slli a2, a2, 1
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8_v9
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m1, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
; SPILL-O0-NEXT: vmv1r.v v8, v9
; SPILL-O0-NEXT: addi a0, sp, 16
@@ -167,8 +167,8 @@ define <vscale x 4 x i32> @spill_zvlsseg_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: csrr a2, vlenb
; SPILL-O0-NEXT: slli a2, a2, 1
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
; SPILL-O0-NEXT: vmv2r.v v8, v10
; SPILL-O0-NEXT: addi a0, sp, 16
@@ -247,8 +247,8 @@ define <vscale x 8 x i32> @spill_zvlsseg_nxv8i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: csrr a2, vlenb
; SPILL-O0-NEXT: slli a2, a2, 2
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8m4_v12m4
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m4, tu, ma
; SPILL-O0-NEXT: vlseg2e32.v v8, (a0)
; SPILL-O0-NEXT: vmv4r.v v8, v12
; SPILL-O0-NEXT: addi a0, sp, 16
@@ -327,8 +327,8 @@ define <vscale x 4 x i32> @spill_zvlsseg3_nxv4i32(ptr %base, i32 %vl) nounwind {
; SPILL-O0-NEXT: csrr a2, vlenb
; SPILL-O0-NEXT: slli a2, a2, 1
; SPILL-O0-NEXT: sub sp, sp, a2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8m2_v10m2_v12m2
; SPILL-O0-NEXT: vsetvli zero, a1, e32, m2, tu, ma
; SPILL-O0-NEXT: vlseg3e32.v v8, (a0)
; SPILL-O0-NEXT: vmv2r.v v8, v10
; SPILL-O0-NEXT: addi a0, sp, 16
4 changes: 2 additions & 2 deletions llvm/test/CodeGen/RISCV/rvv/rv64-spill-vector-csr.ll
@@ -24,8 +24,8 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: add a1, sp, a1
; SPILL-O0-NEXT: addi a1, a1, 32
; SPILL-O0-NEXT: vs1r.v v9, (a1) # Unknown-size Folded Spill
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: vfadd.vv v8, v9, v10
; SPILL-O0-NEXT: addi a0, sp, 32
; SPILL-O0-NEXT: vs1r.v v8, (a0) # Unknown-size Folded Spill
@@ -40,8 +40,8 @@ define <vscale x 1 x double> @foo(<vscale x 1 x double> %a, <vscale x 1 x double
; SPILL-O0-NEXT: vl1r.v v9, (a1) # Unknown-size Folded Reload
; SPILL-O0-NEXT: # kill: def $x11 killed $x10
; SPILL-O0-NEXT: ld a0, 16(sp) # 8-byte Folded Reload
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: # implicit-def: $v8
; SPILL-O0-NEXT: vsetvli zero, a0, e64, m1, ta, ma
; SPILL-O0-NEXT: vfadd.vv v8, v9, v10
; SPILL-O0-NEXT: csrr a0, vlenb
; SPILL-O0-NEXT: slli a0, a0, 1