[AMDGPU] don't mark control-flow intrinsics as convergent #90026

Merged

ssahasra merged 1 commit into llvm:main from ssahasra/cf-no-convergent on May 6, 2024

Conversation

ssahasra
Collaborator

This is really a workaround to allow control-flow lowering in the presence of convergence control tokens. Control-flow intrinsics in LLVM IR are convergent because they indirectly represent the wave CFG, i.e., sets of threads that are "converged" or "execute in lock-step". But they exist only during a small window in the lowering process: they are inserted after the structurizer and then translated to equivalent MIR pseudos. So rather than creating convergence tokens for these builtins, we simply mark them as not convergent.

The corresponding MIR pseudos are marked as having side effects, which is sufficient to prevent optimizations without having to mark them as convergent.
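To make the window concrete, here is a rough, illustrative sketch (not taken from this patch) of how a divergent branch looks in IR once these intrinsics have been inserted after the structurizer (by SIAnnotateControlFlow in the SelectionDAG path); block names and values are hypothetical, and the i64 variants correspond to a wave64 exec mask:

  %if = call { i1, i64 } @llvm.amdgcn.if.i64(i1 %cond)
  %guard = extractvalue { i1, i64 } %if, 0   ; whether this lane executes the "then" block
  %mask = extractvalue { i1, i64 } %if, 1    ; mask token consumed by llvm.amdgcn.end.cf
  br i1 %guard, label %then, label %endif

then:
  ; ... divergent work ...
  br label %endif

endif:
  call void @llvm.amdgcn.end.cf.i64(i64 %mask)   ; rejoin: re-enable lanes at the merge point

With this change, such calls no longer carry the convergent attribute, so the lowering does not have to synthesize convergence control tokens for them before they are replaced by MIR pseudos.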

@llvmbot added the backend:AMDGPU, llvm:globalisel, llvm:ir, and llvm:analysis labels on Apr 25, 2024
@llvmbot
Member

llvmbot commented Apr 25, 2024

@llvm/pr-subscribers-llvm-analysis
@llvm/pr-subscribers-llvm-globalisel
@llvm/pr-subscribers-backend-amdgpu

@llvm/pr-subscribers-llvm-ir

Author: Sameer Sahasrabuddhe (ssahasra)

Changes

This is really a workaround to allow control-flow lowering in the presence of convergence control tokens. Control-flow intrinsics in LLVM IR are convergent because they indirectly represent the wave CFG, i.e., sets of threads that are "converged" or "execute in lock-step". But they exist only during a small window in the lowering process: they are inserted after the structurizer and then translated to equivalent MIR pseudos. So rather than creating convergence tokens for these builtins, we simply mark them as not convergent.

The corresponding MIR pseudos are marked as having side effects, which is sufficient to prevent optimizations without having to mark them as convergent.
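For context only (this is not in the patch), the corresponding MIR pseudos are along the lines of SI_IF and SI_END_CF. A heavily simplified sketch of how they might appear after instruction selection, with register classes and implicit operands approximated:

  ; SI_IF masks off the lanes for which %cond is false and records what must be
  ; re-enabled at SI_END_CF; %bb.2 is the block branched to when no lanes remain
  ; active (semantics approximate).
  %mask:sreg_64_xexec = SI_IF %cond, %bb.2, implicit-def $exec, implicit-def $scc, implicit $exec
  ; ... divergent work ...
  SI_END_CF %mask, implicit-def $exec, implicit-def $scc, implicit $exec

Because these pseudos are modelled as having side effects, later machine passes will not freely move or delete them, which is the property the convergent attribute was previously providing at the IR level.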


Patch is 98.97 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/90026.diff

17 Files Affected:

  • (modified) llvm/include/llvm/IR/IntrinsicsAMDGPU.td (+17-5)
  • (modified) llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/hidden-diverge-gmir.mir (+8-8)
  • (modified) llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/temporal-divergence.mir (+14-14)
  • (modified) llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/uses-value-from-cycle.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.mir (+12-12)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.mir (+56-56)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-structurizer.mir (+67-67)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-i1.mir (+20-20)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-temporal-divergent-reg.mir (+6-6)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll (+5-5)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-function-args.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-amdgcn.if-invalid.mir (+12-12)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-amdgcn.if.xfail.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-brcond.mir (+11-11)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-amdgcn.else.32.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-amdgcn.else.64.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/convergence-tokens.ll (+3-3)
diff --git a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
index ee9a5d7a343980..8398a689b8029b 100644
--- a/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
+++ b/llvm/include/llvm/IR/IntrinsicsAMDGPU.td
@@ -3143,25 +3143,37 @@ def int_amdgcn_cvt_sr_fp8_f32 : ClangBuiltin<"__builtin_amdgcn_cvt_sr_fp8_f32">,
 // Special Intrinsics for backend internal use only. No frontend
 // should emit calls to these.
 // ===----------------------------------------------------------------------===//
+//
+// This is really a workaround to allow control flow lowering in the presence of
+// convergence control tokens. Control-flow intrinsics in LLVM IR are convergent
+// because they represent the wave CFG, i.e., sets of threads that are
+// "converged" or "execute in lock-step". But they exist during a small window
+// in the lowering process, inserted after the structurizer and then translated
+// to equivalent MIR pseudos. So rather than create convergence tokens for these
+// builtins, we simply mark them as not convergent.
+//
+// The corresponding MIR pseudos are marked as having side effects, which is
+// sufficient to prevent optimizations without having to mark them as
+// convergent.
 def int_amdgcn_if : Intrinsic<[llvm_i1_ty, llvm_anyint_ty],
-  [llvm_i1_ty], [IntrConvergent, IntrWillReturn, IntrNoCallback, IntrNoFree]
+  [llvm_i1_ty], [IntrWillReturn, IntrNoCallback, IntrNoFree]
 >;
 
 def int_amdgcn_else : Intrinsic<[llvm_i1_ty, llvm_anyint_ty],
-  [llvm_anyint_ty], [IntrConvergent, IntrWillReturn, IntrNoCallback, IntrNoFree]
+  [llvm_anyint_ty], [IntrWillReturn, IntrNoCallback, IntrNoFree]
 >;
 
 def int_amdgcn_if_break : Intrinsic<[llvm_anyint_ty],
   [llvm_i1_ty, LLVMMatchType<0>],
-  [IntrNoMem, IntrConvergent, IntrWillReturn, IntrNoCallback, IntrNoFree]
+  [IntrNoMem, IntrWillReturn, IntrNoCallback, IntrNoFree]
 >;
 
 def int_amdgcn_loop : Intrinsic<[llvm_i1_ty],
-  [llvm_anyint_ty], [IntrConvergent, IntrWillReturn, IntrNoCallback, IntrNoFree]
+  [llvm_anyint_ty], [IntrWillReturn, IntrNoCallback, IntrNoFree]
 >;
 
 def int_amdgcn_end_cf : Intrinsic<[], [llvm_anyint_ty],
-  [IntrConvergent, IntrWillReturn, IntrNoCallback, IntrNoFree]>;
+  [IntrWillReturn, IntrNoCallback, IntrNoFree]>;
 
 // Represent unreachable in a divergent region.
 def int_amdgcn_unreachable : Intrinsic<[], [], [IntrConvergent, IntrNoCallback, IntrNoFree]>;
diff --git a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/hidden-diverge-gmir.mir b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/hidden-diverge-gmir.mir
index 17757b99ccab49..ce00edf3363f77 100644
--- a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/hidden-diverge-gmir.mir
+++ b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/hidden-diverge-gmir.mir
@@ -4,16 +4,16 @@
 # CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.workitem.id.x)
 # CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1) = G_ICMP intpred(slt)
 # CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1) = G_XOR %{{[0-9]*}}:_, %{{[0-9]*}}:_
-# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
-# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
+# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
+# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
 # CHECK: DIVERGENT: G_BRCOND %{{[0-9]*}}:_(s1), %bb.1
 # CHECK: DIVERGENT: G_BR %bb.2
 # CHECK-LABEL: BLOCK bb.1
 # CHECK-LABEL: BLOCK bb.2
 # CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s32) = G_PHI %{{[0-9]*}}:_(s32), %bb.1, %{{[0-9]*}}:_(s32), %bb.0
 # CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1) = G_PHI %{{[0-9]*}}:_(s1), %bb.1, %{{[0-9]*}}:_(s1), %bb.0
-# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
-# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
+# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
+# CHECK: DIVERGENT: %{{[0-9]*}}: %{{[0-9]*}}:_(s1), %{{[0-9]*}}:_(s64) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if)
 # CHECK: DIVERGENT: G_BRCOND %{{[0-9]*}}:_(s1), %bb.3
 # CHECK: DIVERGENT: G_BR %bb.4
 # CHECK-LABEL: BLOCK bb.3
@@ -44,7 +44,7 @@ body:             |
     %14:_(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.workitem.id.x)
     %16:_(s1) = G_ICMP intpred(slt), %14(s32), %15
     %18:_(s1) = G_XOR %16, %17
-    %19:_(s1), %20:_(s64) = G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if), %16(s1)
+    %19:_(s1), %20:_(s64) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if), %16(s1)
     G_BRCOND %19(s1), %bb.2
     G_BR %bb.3
 
@@ -60,8 +60,8 @@ body:             |
 
     %25:_(s32) = G_PHI %22(s32), %bb.2, %33(s32), %bb.1
     %26:_(s1) = G_PHI %24(s1), %bb.2, %18(s1), %bb.1
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %20(s64)
-    %27:_(s1), %28:_(s64) = G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if), %26(s1)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %20(s64)
+    %27:_(s1), %28:_(s64) = G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.if), %26(s1)
     G_BRCOND %27(s1), %bb.4
     G_BR %bb.5
 
@@ -72,7 +72,7 @@ body:             |
 
   bb.5:
     %31:_(s32) = G_PHI %25(s32), %bb.3, %29(s32), %bb.4
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %28(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %28(s64)
     G_STORE %31(s32), %32(p1) :: (volatile store (s32) into `ptr addrspace(1) undef`, addrspace 1)
     S_ENDPGM 0
 
diff --git a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/temporal-divergence.mir b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/temporal-divergence.mir
index 1e83151f692471..7bff87c09b3c9f 100644
--- a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/temporal-divergence.mir
+++ b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/temporal-divergence.mir
@@ -27,7 +27,7 @@ body:             |
 
     %11:_(s64) = G_PHI %12(s64), %bb.2, %15(s64), %bb.1
     %18:_(s1) = G_CONSTANT i1 false
-    %12:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %18(s1), %11(s64)
+    %12:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %18(s1), %11(s64)
   ; CHECK:   DIVERGENT: SI_LOOP
     SI_LOOP %12(s64), %bb.2, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.3
@@ -35,7 +35,7 @@ body:             |
   bb.3:
   ; CHECK:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
     %14:_(s64) = G_PHI %12(s64), %bb.2
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %14(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %14(s64)
     S_ENDPGM 0
 
 ...
@@ -82,7 +82,7 @@ body:             |
     successors: %bb.5, %bb.4
 
     %15:_(s64) = G_PHI %24(s64), %bb.2, %16(s64), %bb.4
-    %16:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %14(s1), %15(s64)
+    %16:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %14(s1), %15(s64)
   ; CHECK:   DIVERGENT: SI_LOOP
     SI_LOOP %16(s64), %bb.4, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.5
@@ -90,7 +90,7 @@ body:             |
   bb.5:
   ; CHECK:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
     %18:_(s64) = G_PHI %16(s64), %bb.4
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
     G_BR %bb.3
 
   bb.6:
@@ -140,7 +140,7 @@ body:             |
     successors: %bb.5, %bb.4
 
     %15:_(s64) = G_PHI %24(s64), %bb.2, %16(s64), %bb.4
-    %16:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %14(s1), %15(s64)
+    %16:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %14(s1), %15(s64)
   ; CHECK:   DIVERGENT: SI_LOOP
     SI_LOOP %16(s64), %bb.4, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.5
@@ -148,7 +148,7 @@ body:             |
   bb.5:
   ; CHECK:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
     %18:_(s64) = G_PHI %16(s64), %bb.4
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
     G_BR %bb.3
 
   bb.6:
@@ -191,7 +191,7 @@ body:             |
 
     %15:_(s64) = G_PHI %25(s64), %bb.2, %16(s64), %bb.3
     %24:_(s1) = G_CONSTANT i1 false
-    %16:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %24(s1), %15(s64)
+    %16:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %24(s1), %15(s64)
   ; CHECK:   DIVERGENT: SI_LOOP
     SI_LOOP %16(s64), %bb.3, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.4
@@ -201,7 +201,7 @@ body:             |
     successors: %bb.5, %bb.2
 
     %18:_(s64) = G_PHI %16(s64), %bb.3
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
     G_BRCOND %13(s1), %bb.2
     G_BR %bb.5
 
@@ -241,7 +241,7 @@ body:             |
   bb.2:
     %15:_(s64) = G_PHI %16(s64), %bb.4, %19(s64), %bb.1
     %24:_(s1) = G_CONSTANT i1 true
-    %16:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %24(s1), %15(s64)
+    %16:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %24(s1), %15(s64)
 
   bb.3:
     successors: %bb.4, %bb.3
@@ -259,7 +259,7 @@ body:             |
   bb.5:
   ; CHECK:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
     %18:_(s64) = G_PHI %16(s64), %bb.4
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
     S_ENDPGM 0
 
 ...
@@ -291,7 +291,7 @@ body:             |
 
     %10:_(s64) = G_PHI %11(s64), %bb.2, %19(s64), %bb.1
     %24:_(s1) = G_CONSTANT i1 false
-    %11:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %24(s1), %10(s64)
+    %11:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %24(s1), %10(s64)
   ; CHECK:   DIVERGENT: SI_LOOP
     SI_LOOP %11(s64), %bb.2, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.3
@@ -300,7 +300,7 @@ body:             |
   ; CHECK:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
   ; CHECK-NOT:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
     %13:_(s64) = G_PHI %11(s64), %bb.2
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %13(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %13(s64)
     %14:_(p4) = COPY %3(p4)
     %15:_(s64) = G_CONSTANT i64 40
     %16:_(p4) = G_PTR_ADD %14, %15(s64)
@@ -354,7 +354,7 @@ body:             |
 
     %15:_(s64) = G_PHI %23(s64), %bb.2, %16(s64), %bb.3
     %25:_(s1) = G_CONSTANT i1 false
-    %16:sreg_64_xexec(s64) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %25(s1), %15(s64)
+    %16:sreg_64_xexec(s64) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %25(s1), %15(s64)
   ; CHECK:   DIVERGENT: SI_LOOP
     SI_LOOP %16(s64), %bb.3, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.4
@@ -362,7 +362,7 @@ body:             |
   bb.4:
   ; CHECK:   DIVERGENT: %{{[0-9]+}}: %{{[0-9]+}}:_(s64) = G_PHI
     %18:_(s64) = G_PHI %16(s64), %bb.3
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %18(s64)
 
   bb.5:
 
diff --git a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/uses-value-from-cycle.mir b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/uses-value-from-cycle.mir
index d1f53aadc37d71..b7e0d5449d2e8b 100644
--- a/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/uses-value-from-cycle.mir
+++ b/llvm/test/Analysis/UniformityAnalysis/AMDGPU/MIR/uses-value-from-cycle.mir
@@ -43,18 +43,18 @@ body:             |
     ; CHECK: DIVERGENT: %{{[0-9]}}: %{{[0-9]}}:_(s32) = G_PHI
     ; CHECK: DIVERGENT: %{{[0-9]}}: %{{[0-9]}}:_(s32) = G_PHI
     ; CHECK: DIVERGENT: %{{[0-9]}}: %{{[0-9]}}:_(s32) = G_PHI
-    ; CHECK-NOT: DIVERGENT: %{{[0-9]}}: %{{[0-9]}}:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break)
+    ; CHECK-NOT: DIVERGENT: %{{[0-9]}}: %{{[0-9]}}:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break)
     %19:_(s32) = G_PHI %18(s32), %bb.7, %25(s32), %bb.4
     %20:_(s32) = G_PHI %6(s32), %bb.7, %25(s32), %bb.4
     %21:_(s1) = G_PHI %34(s1), %bb.7, %33(s1), %bb.4
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %16(s32)
-    %22:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %21(s1), %0(s32)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %16(s32)
+    %22:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %21(s1), %0(s32)
     SI_LOOP %22(s32), %bb.2, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.6
   
   bb.6:
     %24:_(s32) = G_PHI %22(s32), %bb.5
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %24(s32)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %24(s32)
     SI_RETURN
   
   bb.7:
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.mir
index 9b0bd2752b8231..6594d7f5042123 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-phis-no-lane-mask-merging.mir
@@ -218,17 +218,17 @@ body: |
   ; GFX10-NEXT:   [[FCMP:%[0-9]+]]:_(s1) = G_FCMP floatpred(ogt), [[UITOFP]](s32), [[COPY]]
   ; GFX10-NEXT:   [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 1
   ; GFX10-NEXT:   [[ADD:%[0-9]+]]:_(s32) = G_ADD [[PHI2]], [[C3]]
-  ; GFX10-NEXT:   [[INTRINSIC_CONVERGENT:%[0-9]+]]:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), [[FCMP]](s1), [[PHI1]](s32)
+  ; GFX10-NEXT:   [[INT:%[0-9]+]]:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), [[FCMP]](s1), [[PHI1]](s32)
   ; GFX10-NEXT:   [[S_ANDN2_B32_:%[0-9]+]]:sreg_32(s1) = S_ANDN2_B32 [[COPY3]](s1), $exec_lo, implicit-def $scc
   ; GFX10-NEXT:   [[S_AND_B32_:%[0-9]+]]:sreg_32(s1) = S_AND_B32 $exec_lo, [[COPY4]](s1), implicit-def $scc
   ; GFX10-NEXT:   [[S_OR_B32_:%[0-9]+]]:sreg_32(s1) = S_OR_B32 [[S_ANDN2_B32_]](s1), [[S_AND_B32_]](s1), implicit-def $scc
-  ; GFX10-NEXT:   SI_LOOP [[INTRINSIC_CONVERGENT]](s32), %bb.1, implicit-def $exec, implicit-def $scc, implicit $exec
+  ; GFX10-NEXT:   SI_LOOP [[INT]](s32), %bb.1, implicit-def $exec, implicit-def $scc, implicit $exec
   ; GFX10-NEXT:   G_BR %bb.2
   ; GFX10-NEXT: {{  $}}
   ; GFX10-NEXT: bb.2:
-  ; GFX10-NEXT:   [[PHI4:%[0-9]+]]:_(s32) = G_PHI [[INTRINSIC_CONVERGENT]](s32), %bb.1
+  ; GFX10-NEXT:   [[PHI4:%[0-9]+]]:_(s32) = G_PHI [[INT]](s32), %bb.1
   ; GFX10-NEXT:   [[COPY5:%[0-9]+]]:sreg_32(s1) = COPY [[S_OR_B32_]](s1)
-  ; GFX10-NEXT:   G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), [[PHI4]](s32)
+  ; GFX10-NEXT:   G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), [[PHI4]](s32)
   ; GFX10-NEXT:   [[C4:%[0-9]+]]:_(s32) = G_FCONSTANT float 0.000000e+00
   ; GFX10-NEXT:   [[C5:%[0-9]+]]:_(s32) = G_FCONSTANT float 1.000000e+00
   ; GFX10-NEXT:   [[SELECT:%[0-9]+]]:_(s32) = G_SELECT [[COPY5]](s1), [[C5]], [[C4]]
@@ -257,14 +257,14 @@ body: |
     %14:_(s1) = G_FCMP floatpred(ogt), %13(s32), %0
     %15:_(s32) = G_CONSTANT i32 1
     %9:_(s32) = G_ADD %8, %15
-    %7:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %14(s1), %6(s32)
+    %7:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %14(s1), %6(s32)
     SI_LOOP %7(s32), %bb.1, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.2
 
   bb.2:
     %16:_(s1) = G_PHI %11(s1), %bb.1
     %17:_(s32) = G_PHI %7(s32), %bb.1
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %17(s32)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %17(s32)
     %18:_(s32) = G_FCONSTANT float 0.000000e+00
     %19:_(s32) = G_FCONSTANT float 1.000000e+00
     %20:_(s32) = G_SELECT %16(s1), %19, %18
@@ -348,18 +348,18 @@ body: |
   ; GFX10-NEXT:   [[FCMP1:%[0-9]+]]:_(s1) = G_FCMP floatpred(ogt), [[UITOFP]](s32), [[COPY]]
   ; GFX10-NEXT:   [[C9:%[0-9]+]]:_(s32) = G_CONSTANT i32 1
   ; GFX10-NEXT:   [[ADD:%[0-9]+]]:_(s32) = G_ADD [[PHI3]], [[C9]]
-  ; GFX10-NEXT:   [[INTRINSIC_CONVERGENT:%[0-9]+]]:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), [[FCMP1]](s1), [[PHI2]](s32)
+  ; GFX10-NEXT:   [[INT:%[0-9]+]]:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), [[FCMP1]](s1), [[PHI2]](s32)
   ; GFX10-NEXT:   [[COPY12:%[0-9]+]]:sreg_32(s1) = COPY [[XOR1]](s1)
   ; GFX10-NEXT:   [[S_ANDN2_B32_:%[0-9]+]]:sreg_32(s1) = S_ANDN2_B32 [[COPY9]](s1), $exec_lo, implicit-def $scc
   ; GFX10-NEXT:   [[S_AND_B32_:%[0-9]+]]:sreg_32(s1) = S_AND_B32 $exec_lo, [[COPY11]](s1), implicit-def $scc
   ; GFX10-NEXT:   [[S_OR_B32_:%[0-9]+]]:sreg_32(s1) = S_OR_B32 [[S_ANDN2_B32_]](s1), [[S_AND_B32_]](s1), implicit-def $scc
-  ; GFX10-NEXT:   SI_LOOP [[INTRINSIC_CONVERGENT]](s32), %bb.1, implicit-def $exec, implicit-def $scc, implicit $exec
+  ; GFX10-NEXT:   SI_LOOP [[INT]](s32), %bb.1, implicit-def $exec, implicit-def $scc, implicit $exec
   ; GFX10-NEXT:   G_BR %bb.6
   ; GFX10-NEXT: {{  $}}
   ; GFX10-NEXT: bb.6:
-  ; GFX10-NEXT:   [[PHI5:%[0-9]+]]:_(s32) = G_PHI [[INTRINSIC_CONVERGENT]](s32), %bb.5
+  ; GFX10-NEXT:   [[PHI5:%[0-9]+]]:_(s32) = G_PHI [[INT]](s32), %bb.5
   ; GFX10-NEXT:   [[COPY13:%[0-9]+]]:sreg_32(s1) = COPY [[S_OR_B32_]](s1)
-  ; GFX10-NEXT:   G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), [[PHI5]](s32)
+  ; GFX10-NEXT:   G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), [[PHI5]](s32)
   ; GFX10-NEXT:   [[C10:%[0-9]+]]:_(s32) = G_FCONSTANT float 0.000000e+00
   ; GFX10-NEXT:   [[C11:%[0-9]+]]:_(s32) = G_FCONSTANT float 1.000000e+00
   ; GFX10-NEXT:   [[SELECT:%[0-9]+]]:_(s32) = G_SELECT [[COPY13]](s1), [[C11]], [[C10]]
@@ -429,14 +429,14 @@ body: |
     %31:_(s1) = G_FCMP floatpred(ogt), %30(s32), %0
     %32:_(s32) = G_CONSTANT i32 1
     %17:_(s32) = G_ADD %16, %32
-    %15:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), %31(s1), %14(s32)
+    %15:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdgcn.if.break), %31(s1), %14(s32)
     SI_LOOP %15(s32), %bb.1, implicit-def $exec, implicit-def $scc, implicit $exec
     G_BR %bb.6
 
   bb.6:
     %33:_(s1) = G_PHI %19(s1), %bb.5
     %34:_(s32) = G_PHI %15(s32), %bb.5
-    G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %34(s32)
+    G_INTRINSIC_W_SIDE_EFFECTS intrinsic(@llvm.amdgcn.end.cf), %34(s32)
     %35:_(s32) = G_FCONSTANT float 0.000000e+00
     %36:_(s32) = G_FCONSTANT float 1.000000e+00
     %37:_(s32) = G_SELECT %33(s1), %36, %35
diff --git a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.mir b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.mir
index 206c0adb6c0c1f..5bbe3e48868998 100644
--- a/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.mir
+++ b/llvm/test/CodeGen/AMDGPU/GlobalISel/divergence-divergent-i1-used-outside-loop.mir
@@ -45,20 +45,20 @@ body: |
   ; GFX10-NEXT:   [[FCMP1:%[0-9]+]]:_(s1) = G_FCMP floatpred(ogt), [[UITOFP]](s32), [[COPY]]
   ; GFX10-NEXT:   [[C3:%[0-9]+]]:_(s32) = G_CONSTANT i32 1
   ; GFX10-NEXT:   [[ADD:%[0-9]+]]:_(s32) = G_ADD [[PHI3]], [[C3]]
-  ; GFX10-NEXT:   [[INTRINSIC_CONVERGENT:%[0-9]+]]:sreg_32_xm0_xexec(s32) = G_INTRINSIC_CONVERGENT intrinsic(@llvm.amdgcn.if.break), [[FCMP1]](s1), [[PHI2]](s32)
+  ; GFX10-NEXT:   [[INT:%[0-9]+]]:sreg_32_xm0_xexec(s32) = G_INTRINSIC intrinsic(@llvm.amdg...
[truncated]

@@ -3143,25 +3143,37 @@ def int_amdgcn_cvt_sr_fp8_f32 : ClangBuiltin<"__builtin_amdgcn_cvt_sr_fp8_f32">,
// Special Intrinsics for backend internal use only. No frontend
// should emit calls to these.
// ===----------------------------------------------------------------------===//
//
// This is really a workaround to allow control flow lowering in the presence of
Contributor
This comment starts with no context

Collaborator Author

Fixed.

This is really a workaround to allow control flow lowering in the presence of
convergence control tokens. Control-flow intrinsics in LLVM IR are convergent
because they indirectly represent the wave CFG, i.e., sets of threads that are
"converged" or "execute in lock-step". But they exist during a small window in
the lowering process, inserted after the structurizer and then translated to
equivalent MIR pseudos. So rather than create convergence tokens for these
builtins, we simply mark them as not convergent.

The corresponding MIR pseudos are marked as having side effects, which is
sufficient to prevent optimizations without having to mark them as convergent.
ssahasra force-pushed the ssahasra/cf-no-convergent branch from 97b7ba9 to 32ac2c8 on April 29, 2024 at 06:48
ssahasra requested a review from alex-t on May 2, 2024 at 05:24
@ssahasra
Collaborator Author

ssahasra commented May 2, 2024

Bump! Added @alex-t because I am told he is working on changes to the control flow lowering.

ssahasra merged commit 8a65ee8 into llvm:main on May 6, 2024
ssahasra deleted the ssahasra/cf-no-convergent branch on May 6, 2024 at 08:37