
[RISCV] vsetvl pseudo may cross inline asm without sideeffect #97794


Closed
BeMg wants to merge 2 commits

Conversation

BeMg
Contributor

@BeMg BeMg commented Jul 5, 2024

After removing side effects from the vsetvl pseudo instruction in commit ff313ee, an issue arose where vsetvl could cross inline assembly during post-RA machine scheduling. This patch reintroduces hasSideEffects=1 to prevent this undesired behavior.
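For illustration, here is a hedged C-level sketch of the kind of code affected (the function name, types, and constraints are illustrative and not taken from this PR): an intrinsic sets VL, and the following inline asm relies on that VL without naming vl or vtype, so once the pseudo has no side effect nothing orders the vsetvli before the asm.

#include <riscv_vector.h>

void example(float *a, float *b, float *c) {
  // Set VL via an intrinsic; the asm below implicitly relies on it.
  size_t vl = __riscv_vsetvl_e32m1(3000);
  vfloat32m1_t va = __riscv_vle32_v_f32m1(a, vl);
  vfloat32m1_t vb = __riscv_vle32_v_f32m1(b, vl);
  vfloat32m1_t vc;
  // The asm names neither vl nor vtype, so the post-RA scheduler sees no
  // dependency between it and the preceding vsetvli.
  asm volatile ("vadd.vv %0, %1, %2"
                : "=vr"(vc)
                : "vr"(va), "vr"(vb));
  __riscv_vse32_v_f32m1(c, vc, vl);
}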

@BeMg BeMg requested review from lukel97 and topperc July 5, 2024 07:16
@llvmbot
Member

llvmbot commented Jul 5, 2024

@llvm/pr-subscribers-backend-risc-v

Author: Piyou Chen (BeMg)

Changes

After removing side effects from the vsetvl pseudo instruction in commit ff313ee, an issue arose where vsetvl could cross inline assembly during post-RA machine scheduling. This patch reintroduces hasSideEffects=1 to prevent this undesired behavior.


Full diff: https://github.com/llvm/llvm-project/pull/97794.diff

4 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVDeadRegisterDefinitions.cpp (+1-3)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td (+1-1)
  • (added) llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll (+37)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll (+19-18)
diff --git a/llvm/lib/Target/RISCV/RISCVDeadRegisterDefinitions.cpp b/llvm/lib/Target/RISCV/RISCVDeadRegisterDefinitions.cpp
index 7de48d8218f06..5e6b7891449fe 100644
--- a/llvm/lib/Target/RISCV/RISCVDeadRegisterDefinitions.cpp
+++ b/llvm/lib/Target/RISCV/RISCVDeadRegisterDefinitions.cpp
@@ -72,9 +72,7 @@ bool RISCVDeadRegisterDefinitions::runOnMachineFunction(MachineFunction &MF) {
       // are reserved for HINT instructions.
       const MCInstrDesc &Desc = MI.getDesc();
       if (!Desc.mayLoad() && !Desc.mayStore() &&
-          !Desc.hasUnmodeledSideEffects() &&
-          MI.getOpcode() != RISCV::PseudoVSETVLI &&
-          MI.getOpcode() != RISCV::PseudoVSETIVLI)
+          !Desc.hasUnmodeledSideEffects())
         continue;
       // For PseudoVSETVLIX0, Rd = X0 has special meaning.
       if (MI.getOpcode() == RISCV::PseudoVSETVLIX0)
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
index 42d6b03968d74..c61b514fa865e 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
@@ -6011,7 +6011,7 @@ let hasSideEffects = 0, mayLoad = 0, mayStore = 0, Size = 0,
 //===----------------------------------------------------------------------===//
 
 // Pseudos.
-let hasSideEffects = 0, mayLoad = 0, mayStore = 0, Defs = [VL, VTYPE] in {
+let hasSideEffects = 1, mayLoad = 0, mayStore = 0, Defs = [VL, VTYPE] in {
 // Due to rs1=X0 having special meaning, we need a GPRNoX0 register class for
 // the when we aren't using one of the special X0 encodings. Otherwise it could
 // be accidentally be made X0 by MachineIR optimizations. To satisfy the
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll
new file mode 100644
index 0000000000000..f24759078d945
--- /dev/null
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvl-cross-inline-asm.ll
@@ -0,0 +1,37 @@
+; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
+; RUN: llc -mtriple=riscv64 -mcpu=sifive-x280 -verify-machineinstrs < %s | FileCheck %s
+
+define void @foo(<vscale x 8 x half> %0, <vscale x 8 x half> %1) {
+; CHECK-LABEL: foo:
+; CHECK:       # %bb.0: # %entry
+; CHECK-NEXT:    vsetvli a0, zero, e32, m4, ta, ma
+; CHECK-NEXT:    vmv.v.i v16, 0
+; CHECK-NEXT:    lui a0, 1
+; CHECK-NEXT:    addiw a0, a0, -1096
+; CHECK-NEXT:    vmv.v.i v12, 0
+; CHECK-NEXT:    vsetvli zero, a0, e8, m1, ta, ma
+; CHECK-NEXT:    #APP
+; CHECK-NEXT:    vfmadd.vv v16, v12, v12
+; CHECK-NEXT:    #NO_APP
+; CHECK-NEXT:    #APP
+; CHECK-NEXT:    vfmadd.vv v16, v12, v12
+; CHECK-NEXT:    #NO_APP
+; CHECK-NEXT:    vsetvli zero, a0, e16, m2, ta, ma
+; CHECK-NEXT:    vse16.v v8, (zero)
+; CHECK-NEXT:    ret
+entry:
+  %2 = tail call i64 @llvm.riscv.vsetvli.i64(i64 3000, i64 0, i64 0)
+  %3 = tail call <vscale x 8 x float> asm sideeffect "vfmadd.vv $0, $1, $2", "=^vr,^vr,^vr,0"(<vscale x 8 x float> zeroinitializer, <vscale x 8 x float> zeroinitializer, <vscale x 8 x float> zeroinitializer)
+  %4 = tail call <vscale x 8 x float> asm sideeffect "vfmadd.vv $0, $1, $2", "=^vr,^vr,^vr,0"(<vscale x 8 x float> zeroinitializer, <vscale x 8 x float> zeroinitializer, <vscale x 8 x float> %3)
+  tail call void @llvm.riscv.vse.nxv8f16.i64(<vscale x 8 x half> %0, ptr null, i64 %2)
+  ret void
+}
+
+; Function Attrs: nocallback nofree nosync nounwind willreturn memory(none)
+declare i64 @llvm.riscv.vsetvli.i64(i64, i64 immarg, i64 immarg) #0
+
+; Function Attrs: nocallback nofree nosync nounwind willreturn memory(argmem: write)
+declare void @llvm.riscv.vse.nxv8f16.i64(<vscale x 8 x half>, ptr nocapture, i64) #1
+
+attributes #0 = { nocallback nofree nosync nounwind willreturn memory(none) }
+attributes #1 = { nocallback nofree nosync nounwind willreturn memory(argmem: write) }
diff --git a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
index f93022c9d132d..b48a828ff73ef 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vsetvli-insert-crossbb.ll
@@ -242,6 +242,7 @@ define <vscale x 1 x double> @test6(i64 %avl, i8 zeroext %cond, <vscale x 1 x do
 ; CHECK-NEXT:    andi a1, a1, 2
 ; CHECK-NEXT:    beqz a1, .LBB5_4
 ; CHECK-NEXT:  .LBB5_2: # %if.then4
+; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, ma
 ; CHECK-NEXT:    lui a1, %hi(.LCPI5_0)
 ; CHECK-NEXT:    addi a1, a1, %lo(.LCPI5_0)
 ; CHECK-NEXT:    vlse64.v v9, (a1), zero
@@ -631,22 +632,22 @@ declare <vscale x 2 x i32> @llvm.riscv.vwadd.w.nxv2i32.nxv2i16(<vscale x 2 x i32
 define void @vlmax(i64 %N, ptr %c, ptr %a, ptr %b) {
 ; CHECK-LABEL: vlmax:
 ; CHECK:       # %bb.0: # %entry
+; CHECK-NEXT:    vsetvli a6, zero, e64, m1, ta, ma
 ; CHECK-NEXT:    blez a0, .LBB12_3
 ; CHECK-NEXT:  # %bb.1: # %for.body.preheader
-; CHECK-NEXT:    li a4, 0
-; CHECK-NEXT:    vsetvli a6, zero, e64, m1, ta, ma
-; CHECK-NEXT:    slli a5, a6, 3
+; CHECK-NEXT:    li a5, 0
+; CHECK-NEXT:    slli a4, a6, 3
 ; CHECK-NEXT:  .LBB12_2: # %for.body
 ; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
 ; CHECK-NEXT:    vle64.v v8, (a2)
 ; CHECK-NEXT:    vle64.v v9, (a3)
 ; CHECK-NEXT:    vfadd.vv v8, v8, v9
 ; CHECK-NEXT:    vse64.v v8, (a1)
-; CHECK-NEXT:    add a4, a4, a6
-; CHECK-NEXT:    add a1, a1, a5
-; CHECK-NEXT:    add a3, a3, a5
-; CHECK-NEXT:    add a2, a2, a5
-; CHECK-NEXT:    blt a4, a0, .LBB12_2
+; CHECK-NEXT:    add a5, a5, a6
+; CHECK-NEXT:    add a1, a1, a4
+; CHECK-NEXT:    add a3, a3, a4
+; CHECK-NEXT:    add a2, a2, a4
+; CHECK-NEXT:    blt a5, a0, .LBB12_2
 ; CHECK-NEXT:  .LBB12_3: # %for.end
 ; CHECK-NEXT:    ret
 entry:
@@ -678,18 +679,18 @@ for.end:                                          ; preds = %for.body, %entry
 define void @vector_init_vlmax(i64 %N, ptr %c) {
 ; CHECK-LABEL: vector_init_vlmax:
 ; CHECK:       # %bb.0: # %entry
+; CHECK-NEXT:    vsetvli a2, zero, e64, m1, ta, ma
 ; CHECK-NEXT:    blez a0, .LBB13_3
 ; CHECK-NEXT:  # %bb.1: # %for.body.preheader
-; CHECK-NEXT:    li a2, 0
-; CHECK-NEXT:    vsetvli a3, zero, e64, m1, ta, ma
-; CHECK-NEXT:    slli a4, a3, 3
+; CHECK-NEXT:    li a3, 0
+; CHECK-NEXT:    slli a4, a2, 3
 ; CHECK-NEXT:    vmv.v.i v8, 0
 ; CHECK-NEXT:  .LBB13_2: # %for.body
 ; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
 ; CHECK-NEXT:    vse64.v v8, (a1)
-; CHECK-NEXT:    add a2, a2, a3
+; CHECK-NEXT:    add a3, a3, a2
 ; CHECK-NEXT:    add a1, a1, a4
-; CHECK-NEXT:    blt a2, a0, .LBB13_2
+; CHECK-NEXT:    blt a3, a0, .LBB13_2
 ; CHECK-NEXT:  .LBB13_3: # %for.end
 ; CHECK-NEXT:    ret
 entry:
@@ -714,20 +715,20 @@ for.end:                                          ; preds = %for.body, %entry
 define void @vector_init_vsetvli_N(i64 %N, ptr %c) {
 ; CHECK-LABEL: vector_init_vsetvli_N:
 ; CHECK:       # %bb.0: # %entry
+; CHECK-NEXT:    vsetvli a2, a0, e64, m1, ta, ma
 ; CHECK-NEXT:    blez a0, .LBB14_3
 ; CHECK-NEXT:  # %bb.1: # %for.body.preheader
-; CHECK-NEXT:    li a2, 0
-; CHECK-NEXT:    vsetvli a3, a0, e64, m1, ta, ma
-; CHECK-NEXT:    slli a4, a3, 3
+; CHECK-NEXT:    li a3, 0
+; CHECK-NEXT:    slli a4, a2, 3
 ; CHECK-NEXT:    vsetvli a5, zero, e64, m1, ta, ma
 ; CHECK-NEXT:    vmv.v.i v8, 0
 ; CHECK-NEXT:  .LBB14_2: # %for.body
 ; CHECK-NEXT:    # =>This Inner Loop Header: Depth=1
 ; CHECK-NEXT:    vsetvli zero, a0, e64, m1, ta, ma
 ; CHECK-NEXT:    vse64.v v8, (a1)
-; CHECK-NEXT:    add a2, a2, a3
+; CHECK-NEXT:    add a3, a3, a2
 ; CHECK-NEXT:    add a1, a1, a4
-; CHECK-NEXT:    blt a2, a0, .LBB14_2
+; CHECK-NEXT:    blt a3, a0, .LBB14_2
 ; CHECK-NEXT:  .LBB14_3: # %for.end
 ; CHECK-NEXT:    ret
 entry:

@wangpc-pp
Contributor

I don't know if I understand this correctly; correct me if I'm wrong.
I think the behavior is correct, since the scheduler doesn't see the vl dependency of the vfmadd.vv instruction.

@lukel97
Contributor

lukel97 commented Jul 5, 2024

Does the inline assembly need to somehow model that it uses vl and vtype? Marking them as clobbered seems to fix it for me (although I don't think it's the same as an input):

define void @foo(<vscale x 8 x half> %0, <vscale x 8 x half> %1) {
entry:
  %2 = tail call i64 @llvm.riscv.vsetvli.i64(i64 3000, i64 0, i64 0)
  %3 = tail call <vscale x 8 x float> asm sideeffect "vfmadd.vv $0, $1, $2", "=^vr,^vr,^vr,0,~{vl},~{vtype}"(<vscale x 8 x float> zeroinitializer, <vscale x 8 x float> zeroinitializer, <vscale x 8 x float> zeroinitializer)
  %4 = tail call <vscale x 8 x float> asm sideeffect "vfmadd.vv $0, $1, $2", "=^vr,^vr,^vr,0,~{vl},~{vtype}"(<vscale x 8 x float> zeroinitializer, <vscale x 8 x float> zeroinitializer, <vscale x 8 x float> %3)
  tail call void @llvm.riscv.vse.nxv8f16.i64(<vscale x 8 x half> %0, ptr null, i64 %2)
  ret void
}

I'm not opposed to adding back the side effect if we need to keep real-world code working, though. I guess I'm surprised that the machine scheduler would move asm with side effects past an implicit-def.

@topperc
Collaborator

topperc commented Jul 5, 2024

How does GCC handle this same code? Does it make any guarantees?

@kito-cheng
Member

GCC makes no guarantee in this situation either. I guess the best way would be to model the dependency in the inline asm rather than marking vsetvli with a side effect.
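As a reference point, here is a hedged C-level counterpart of lukel97's IR example above (assuming the compiler accepts vl and vtype as clobber names; the function and operand names are illustrative): listing them as clobbers makes the dependency visible to the scheduler without reintroducing the side effect on the pseudo.

#include <riscv_vector.h>

vfloat32m1_t fmadd_asm(vfloat32m1_t acc, vfloat32m1_t x, vfloat32m1_t y) {
  // Clobbering vl and vtype keeps any preceding vsetvli ordered before the
  // asm. The tied operand ("0") mirrors the "=^vr,^vr,^vr,0" constraint
  // string in the test case.
  asm volatile ("vfmadd.vv %0, %1, %2"
                : "=vr"(acc)
                : "vr"(x), "vr"(y), "0"(acc)
                : "vl", "vtype");
  return acc;
}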

@kito-cheng
Member

Hmm, I am also thinking that we could add a few more words on the intrinsic-document side to encourage/suggest that users add those CSRs as a dependency or clobber.

# Mixing inline assembly and intrinsics
The compiler will be conservative to registers (vtype, vxrm, frm) when encountering inline assembly. Users should be aware that mixing uses of intrinsics and inline assembly will result in extra save and restore.

https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/main/doc/rvv-intrinsic-spec.adoc#mixing-inline-assembly-and-intrinsics

@wangpc-pp
Contributor

Maybe we can add a new inline asm constraint that will add implicit vl/vtype operands?

@kito-cheng
Member

Maybe we can add a new inline asm constraint that will add implicit vl/vtype operands?

That sounds like one possible direction, but it needs a bit more brainstorming :P

Two options in my mind so far:

  1. Encode the correct VTYPE value in an inline asm operand:
#include <riscv_vector.h>

void foo(int32_t *a, int32_t *b, int32_t *c, size_t vl){
   size_t avl = __riscv_vsetvl_e32m1(vl);
   vint32m1_t va, vb, vc;
   vb = __riscv_vle32_v_i32m1(b, avl);
   vc = __riscv_vle32_v_i32m1(c, avl);
   asm ("vadd.vv %0, %1, %2"
        : "=vr" (va)
        : "vr" (vb), "vr" (vc), "r"(avl), "VT"(ENCODING_VTYPE(SEW32, LMUL1))
        );

   __riscv_vse32_v_i32m1(a, va, avl);
}

But that means we need to handle inline asm within vsetvli insertion.

  2. Extend the syntax to allow a physical register to appear in the input operands:
#include <riscv_vector.h>

void foo(int32_t *a, int32_t *b, int32_t *c, size_t vl){
   size_t avl = __riscv_vsetvl_e32m1(vl);
   vint32m1_t va, vb, vc;
   vb = __riscv_vle32_v_i32m1(b, avl);
   vc = __riscv_vle32_v_i32m1(c, avl);
   asm ("vadd.vv %0, %1, %2"
        : "=vr" (va)
        : "vr" (vb), "vr" (vc), "r"(avl), "vtype");

   __riscv_vse32_v_i32m1(a, va, avl);
}

@BeMg
Contributor Author

BeMg commented Jul 23, 2024

Closing, since we decided not to change the side effect of the vsetvl pseudo.
