
[AMDGPU] Add option to pre-allocate SGPR spill VGPRs #70626

Merged: 2 commits merged into llvm:main from the pre-allocate-sgpr-spill-vgprs branch on Nov 13, 2023

Conversation

perlfu (Contributor) commented Oct 30, 2023

SGPR spill VGPRs are WWM registers, so allow them to be allocated by the SIPreAllocateWWMRegs pass.
This intentionally prevents spilling of these VGPRs when enabled.
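
The option is off by default; as a minimal sketch, it can be enabled globally with the hidden llc flag added here (mirroring the RUN line this patch adds to the test, with input.ll as a placeholder):

  llc -O0 -mtriple=amdgcn-amd-amdpal -mcpu=gfx1030 -amdgpu-prealloc-sgpr-spill-vgprs=1 < input.ll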

llvmbot (Member) commented Oct 30, 2023

@llvm/pr-subscribers-backend-amdgpu

Author: Carl Ritson (perlfu)

Changes

SGPR spill VGPRs are WWM registers, so allow them to be allocated by the SIPreAllocateWWMRegs pass.
This intentionally prevents spilling of these VGPRs when enabled.


Patch is 27.80 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/70626.diff

2 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIPreAllocateWWMRegs.cpp (+11-1)
  • (modified) llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll (+388)
diff --git a/llvm/lib/Target/AMDGPU/SIPreAllocateWWMRegs.cpp b/llvm/lib/Target/AMDGPU/SIPreAllocateWWMRegs.cpp
index c2ddfd7881ab760..dfe5a6db2db6b92 100644
--- a/llvm/lib/Target/AMDGPU/SIPreAllocateWWMRegs.cpp
+++ b/llvm/lib/Target/AMDGPU/SIPreAllocateWWMRegs.cpp
@@ -28,6 +28,10 @@ using namespace llvm;
 
 #define DEBUG_TYPE "si-pre-allocate-wwm-regs"
 
+static cl::opt<bool>
+    EnablePreallocateSGPRSpillVGPRs("amdgpu-prealloc-sgpr-spill-vgprs",
+                                    cl::init(false), cl::Hidden);
+
 namespace {
 
 class SIPreAllocateWWMRegs : public MachineFunctionPass {
@@ -201,6 +205,10 @@ bool SIPreAllocateWWMRegs::runOnMachineFunction(MachineFunction &MF) {
 
   RegClassInfo.runOnMachineFunction(MF);
 
+  bool PreallocateSGPRSpillVGPRs =
+      EnablePreallocateSGPRSpillVGPRs ||
+      MF.getFunction().hasFnAttribute("amdgpu-prealloc-sgpr-spill-vgprs");
+
   bool RegsAssigned = false;
 
   // We use a reverse post-order traversal of the control-flow graph to
@@ -214,7 +222,9 @@ bool SIPreAllocateWWMRegs::runOnMachineFunction(MachineFunction &MF) {
     bool InWWM = false;
     for (MachineInstr &MI : *MBB) {
       if (MI.getOpcode() == AMDGPU::V_SET_INACTIVE_B32 ||
-          MI.getOpcode() == AMDGPU::V_SET_INACTIVE_B64)
+          MI.getOpcode() == AMDGPU::V_SET_INACTIVE_B64 ||
+          (PreallocateSGPRSpillVGPRs &&
+           MI.getOpcode() == AMDGPU::SI_SPILL_S32_TO_VGPR))
         RegsAssigned |= processDef(MI.getOperand(0));
 
       if (MI.getOpcode() == AMDGPU::ENTER_STRICT_WWM ||
diff --git a/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll b/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll
index 36b5e2a00f6d4d3..78871385f8ffcb2 100644
--- a/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll
+++ b/llvm/test/CodeGen/AMDGPU/stacksave_stackrestore.ll
@@ -4,6 +4,7 @@
 
 ; RUN: llc -O0 -mtriple=amdgcn-amd-amdpal -mcpu=gfx1030 < %s | FileCheck -check-prefixes=GCN,WAVE32,WAVE32-O0 %s
 ; RUN: llc -O0 -mtriple=amdgcn-amd-amdpal -mcpu=gfx1030 -mattr=+wavefrontsize64 < %s | FileCheck -check-prefixes=GCN,WAVE64,WAVE64-O0 %s
+; RUN: llc -O0 -mtriple=amdgcn-amd-amdpal -mcpu=gfx1030 -amdgpu-prealloc-sgpr-spill-vgprs=1 < %s | FileCheck -check-prefixes=GCN,WAVE32,WAVE32-WWM-PREALLOC %s
 
 declare ptr addrspace(5) @llvm.stacksave.p5()
 declare void @llvm.stackrestore.p5(ptr addrspace(5))
@@ -54,6 +55,16 @@ define void @func_store_stacksave() {
 ; WAVE64-O0-NEXT:    ; use s4
 ; WAVE64-O0-NEXT:    ;;#ASMEND
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_store_stacksave:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMSTART
+; WAVE32-WWM-PREALLOC-NEXT:    ; use s4
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMEND
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   call void asm sideeffect "; use $0", "s"(ptr addrspace(5) %stacksave)
   ret void
@@ -93,6 +104,15 @@ define amdgpu_kernel void @kernel_store_stacksave() {
 ; WAVE64-O0-NEXT:    ; use s0
 ; WAVE64-O0-NEXT:    ;;#ASMEND
 ; WAVE64-O0-NEXT:    s_endpgm
+;
+; WAVE32-WWM-PREALLOC-LABEL: kernel_store_stacksave:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s0, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s0, s0, 5
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMSTART
+; WAVE32-WWM-PREALLOC-NEXT:    ; use s0
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMEND
+; WAVE32-WWM-PREALLOC-NEXT:    s_endpgm
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   call void asm sideeffect "; use $0", "s"(ptr addrspace(5) %stacksave)
   ret void
@@ -158,6 +178,22 @@ define amdgpu_kernel void @kernel_store_stacksave_nocall() {
 ; WAVE64-O0-NEXT:    v_mov_b32_e32 v1, s0
 ; WAVE64-O0-NEXT:    buffer_store_dword v0, v1, s[12:15], 0 offen
 ; WAVE64-O0-NEXT:    s_endpgm
+;
+; WAVE32-WWM-PREALLOC-LABEL: kernel_store_stacksave_nocall:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_getpc_b64 s[12:13]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s12, s0
+; WAVE32-WWM-PREALLOC-NEXT:    s_load_dwordx4 s[12:15], s[12:13], 0x0
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_bitset0_b32 s15, 21
+; WAVE32-WWM-PREALLOC-NEXT:    s_add_u32 s12, s12, s11
+; WAVE32-WWM-PREALLOC-NEXT:    s_addc_u32 s13, s13, 0
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s0, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s0, s0, 5
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v0, 0
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v1, s0
+; WAVE32-WWM-PREALLOC-NEXT:    buffer_store_dword v0, v1, s[12:15], 0 offen
+; WAVE32-WWM-PREALLOC-NEXT:    s_endpgm
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   store i32 0, ptr addrspace(5) %stacksave
   ret void
@@ -281,6 +317,36 @@ define void @func_stacksave_nonentry_block(i1 %cond) {
 ; WAVE64-O0-NEXT:    s_mov_b64 exec, s[4:5]
 ; WAVE64-O0-NEXT:    s_waitcnt vmcnt(0)
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stacksave_nonentry_block:
+; WAVE32-WWM-PREALLOC:       ; %bb.0: ; %bb0
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_xor_saveexec_b32 s4, -1
+; WAVE32-WWM-PREALLOC-NEXT:    buffer_store_dword v1, off, s[0:3], s32 ; 4-byte Folded Spill
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 exec_lo, s4
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $vgpr1 : SGPR spill to VGPR lane
+; WAVE32-WWM-PREALLOC-NEXT:    v_and_b32_e64 v0, 1, v0
+; WAVE32-WWM-PREALLOC-NEXT:    v_cmp_eq_u32_e64 s5, v0, 1
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, exec_lo
+; WAVE32-WWM-PREALLOC-NEXT:    v_writelane_b32 v1, s4, 0
+; WAVE32-WWM-PREALLOC-NEXT:    s_and_b32 s4, s4, s5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 exec_lo, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_cbranch_execz .LBB4_2
+; WAVE32-WWM-PREALLOC-NEXT:  ; %bb.1: ; %bb1
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMSTART
+; WAVE32-WWM-PREALLOC-NEXT:    ; use s4
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMEND
+; WAVE32-WWM-PREALLOC-NEXT:  .LBB4_2: ; %bb2
+; WAVE32-WWM-PREALLOC-NEXT:    v_readlane_b32 s4, v1, 0
+; WAVE32-WWM-PREALLOC-NEXT:    s_or_b32 exec_lo, exec_lo, s4
+; WAVE32-WWM-PREALLOC-NEXT:    ; kill: killed $vgpr1
+; WAVE32-WWM-PREALLOC-NEXT:    s_xor_saveexec_b32 s4, -1
+; WAVE32-WWM-PREALLOC-NEXT:    buffer_load_dword v1, off, s[0:3], s32 ; 4-byte Folded Reload
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 exec_lo, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
 bb0:
   br i1 %cond, label %bb1, label %bb2
 
@@ -321,6 +387,14 @@ define void @func_stackrestore_poison() {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s4, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stackrestore_poison:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr4
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   call void @llvm.stackrestore.p5(ptr addrspace(5) poison)
   ret void
 }
@@ -353,6 +427,14 @@ define void @func_stackrestore_null() {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s4, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stackrestore_null:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, 0
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   call void @llvm.stackrestore.p5(ptr addrspace(5) null)
   ret void
 }
@@ -385,6 +467,14 @@ define void @func_stackrestore_neg1() {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s4, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stackrestore_neg1:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, -1
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   call void @llvm.stackrestore.p5(ptr addrspace(5) inttoptr (i32 -1 to ptr addrspace(5)))
   ret void
 }
@@ -417,6 +507,14 @@ define void @func_stackrestore_42() {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s4, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stackrestore_42:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, 42
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   call void @llvm.stackrestore.p5(ptr addrspace(5) inttoptr (i32 42 to ptr addrspace(5)))
   ret void
 }
@@ -445,6 +543,13 @@ define void @func_stacksave_stackrestore() {
 ; WAVE64-O0-NEXT:    s_mov_b32 s4, s32
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stacksave_stackrestore:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   call void @llvm.stackrestore.p5(ptr addrspace(5) %stacksave)
   ret void
@@ -490,6 +595,17 @@ define void @func_stacksave_stackrestore_use() {
 ; WAVE64-O0-NEXT:    ;;#ASMEND
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stacksave_stackrestore_use:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s5, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMSTART
+; WAVE32-WWM-PREALLOC-NEXT:    ; use s5
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMEND
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   call void asm sideeffect "; use $0", "s"(ptr addrspace(5) %stacksave)
   call void @llvm.stackrestore.p5(ptr addrspace(5) %stacksave)
@@ -532,6 +648,16 @@ define amdgpu_kernel void @kernel_stacksave_stackrestore_use() {
 ; WAVE64-O0-NEXT:    ;;#ASMEND
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s0
 ; WAVE64-O0-NEXT:    s_endpgm
+;
+; WAVE32-WWM-PREALLOC-LABEL: kernel_stacksave_stackrestore_use:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s0, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s1, s0, 5
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMSTART
+; WAVE32-WWM-PREALLOC-NEXT:    ; use s1
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMEND
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s0
+; WAVE32-WWM-PREALLOC-NEXT:    s_endpgm
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   call void asm sideeffect "; use $0", "s"(ptr addrspace(5) %stacksave)
   call void @llvm.stackrestore.p5(ptr addrspace(5) %stacksave)
@@ -578,6 +704,17 @@ define void @func_stacksave_stackrestore_voffset(i32 %offset) {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s4, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stacksave_stackrestore_voffset:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s4, s32
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    v_add_nc_u32_e64 v0, s4, v0
+; WAVE32-WWM-PREALLOC-NEXT:    v_readfirstlane_b32 s4, v0
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   %stacksave = call ptr addrspace(5) @llvm.stacksave.p5()
   %gep = getelementptr i8, ptr addrspace(5) %stacksave, i32 %offset
   call void @llvm.stackrestore.p5(ptr addrspace(5) %gep)
@@ -614,6 +751,14 @@ define void @func_stacksave_vgpr(ptr addrspace(5) %stack) {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s4, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s4
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stacksave_vgpr:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    v_readfirstlane_b32 s4, v0
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s4, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s4
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   call void @llvm.stackrestore.p5(ptr addrspace(5) %stack)
   ret void
 }
@@ -644,6 +789,13 @@ define amdgpu_gfx void @func_stacksave_sgpr(ptr addrspace(5) inreg %stack) {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s34, s4, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s34
 ; WAVE64-O0-NEXT:    s_setpc_b64 s[30:31]
+;
+; WAVE32-WWM-PREALLOC-LABEL: func_stacksave_sgpr:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s34, s4, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s34
+; WAVE32-WWM-PREALLOC-NEXT:    s_setpc_b64 s[30:31]
   call void @llvm.stackrestore.p5(ptr addrspace(5) %stack)
   ret void
 }
@@ -690,6 +842,18 @@ define amdgpu_kernel void @kernel_stacksave_sgpr(ptr addrspace(5) %stack) {
 ; WAVE64-O0-NEXT:    s_lshl_b32 s0, s0, 6
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s0
 ; WAVE64-O0-NEXT:    s_endpgm
+;
+; WAVE32-WWM-PREALLOC-LABEL: kernel_stacksave_sgpr:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_load_dword s0, s[4:5], 0x0
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s1, s0
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMSTART
+; WAVE32-WWM-PREALLOC-NEXT:    ; use s1
+; WAVE32-WWM-PREALLOC-NEXT:    ;;#ASMEND
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshl_b32 s0, s0, 5
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, s0
+; WAVE32-WWM-PREALLOC-NEXT:    s_endpgm
   call void asm sideeffect "; use $0", "s"(ptr addrspace(5) %stack)
   call void @llvm.stackrestore.p5(ptr addrspace(5) %stack)
   ret void
@@ -985,6 +1149,118 @@ define amdgpu_kernel void @kernel_stacksave_stackrestore_call_with_stack_objects
 ; WAVE64-O0-NEXT:    s_mov_b32 s32, s0
 ; WAVE64-O0-NEXT:    ; kill: killed $vgpr0
 ; WAVE64-O0-NEXT:    s_endpgm
+;
+; WAVE32-WWM-PREALLOC-LABEL: kernel_stacksave_stackrestore_call_with_stack_objects:
+; WAVE32-WWM-PREALLOC:       ; %bb.0:
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s32, 0x1200
+; WAVE32-WWM-PREALLOC-NEXT:    s_getpc_b64 s[20:21]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s20, s0
+; WAVE32-WWM-PREALLOC-NEXT:    s_load_dwordx4 s[20:23], s[20:21], 0x0
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt lgkmcnt(0)
+; WAVE32-WWM-PREALLOC-NEXT:    s_bitset0_b32 s23, 21
+; WAVE32-WWM-PREALLOC-NEXT:    s_add_u32 s20, s20, s11
+; WAVE32-WWM-PREALLOC-NEXT:    s_addc_u32 s21, s21, 0
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $vgpr32 : SGPR spill to VGPR lane
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s14, s10
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s13, s9
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s12, s8
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b64 s[10:11], s[6:7]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b64 s[8:9], s[4:5]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b64 s[6:7], s[2:3]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b64 s[4:5], s[0:1]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s0, s32
+; WAVE32-WWM-PREALLOC-NEXT:    v_writelane_b32 v32, s0, 0
+; WAVE32-WWM-PREALLOC-NEXT:    s_lshr_b32 s0, s0, 5
+; WAVE32-WWM-PREALLOC-NEXT:    v_writelane_b32 v32, s0, 1
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v3, 42
+; WAVE32-WWM-PREALLOC-NEXT:    buffer_store_dword v3, off, s[20:23], 0 offset:4
+; WAVE32-WWM-PREALLOC-NEXT:    s_waitcnt_vscnt null, 0x0
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b64 s[0:1], s[20:21]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b64 s[2:3], s[22:23]
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s15, s32
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v3, 17
+; WAVE32-WWM-PREALLOC-NEXT:    buffer_store_dword v3, off, s[20:23], s15 offset:4
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s15, stack_passed_argument@abs32@hi
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s16, stack_passed_argument@abs32@lo
+; WAVE32-WWM-PREALLOC-NEXT:    ; kill: def $sgpr16 killed $sgpr16 def $sgpr16_sgpr17
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s17, s15
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s15, 20
+; WAVE32-WWM-PREALLOC-NEXT:    v_lshlrev_b32_e64 v2, s15, v2
+; WAVE32-WWM-PREALLOC-NEXT:    s_mov_b32 s15, 10
+; WAVE32-WWM-PREALLOC-NEXT:    v_lshlrev_b32_e64 v1, s15, v1
+; WAVE32-WWM-PREALLOC-NEXT:    v_or3_b32 v31, v0, v1, v2
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr15
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v0, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v1, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v2, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v3, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v4, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v5, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v6, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v7, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v8, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v9, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v10, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v11, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v12, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v13, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v14, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v15, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v16, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v17, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v18, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v19, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v20, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v21, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v22, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v23, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v24, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; WAVE32-WWM-PREALLOC-NEXT:    v_mov_b32_e32 v25, s18
+; WAVE32-WWM-PREALLOC-NEXT:    ; implicit-def: $sgpr18
+; W...
[truncated]

perlfu (Contributor, Author) commented Oct 30, 2023

This is based on #70618, but I have not included those changes in this PR for clarity.
Note: this is not the only way to achieve this; we could modify the SGPR spill pass to force physical-register allocation with a similar amount of code diff.

@@ -201,6 +205,10 @@ bool SIPreAllocateWWMRegs::runOnMachineFunction(MachineFunction &MF) {

RegClassInfo.runOnMachineFunction(MF);

  bool PreallocateSGPRSpillVGPRs =
      EnablePreallocateSGPRSpillVGPRs ||
      MF.getFunction().hasFnAttribute("amdgpu-prealloc-sgpr-spill-vgprs");
A Contributor commented:

I don't see why we would want this controllable by an attribute; for migration purposes a flag is sufficient.

@cdevadas is also working on getting the proper allocator to handle the allocation of WWM registers.

A Collaborator commented:

For shaders, I hope the SGPR spills are very limited and won't require so many VGPRs that we hit the 'ran out of allocatable registers' error.

A Contributor replied:

> I don't see why we would want this controllable by an attribute; for migration purposes a flag is sufficient.

For graphics we use the LLVM backend as a shared library, so passing options into it via a command-line argument (even if it's just for migration) is a terrible interface.

The author (perlfu) replied:

As @jayfoad pointed out, we typically use attributes, as we can start embedding them in IR regardless of backend version.

I have spoken with @cdevadas about more complete allocator handling of WWM registers, and will support migration to it from the graphics side when it is ready.

In the immediate term we need to work around the corruption that split-spilling of SGPR spill VGPRs causes in complex divergent control flow, and we can take the hit of pre-allocating the VGPRs required for all our current workloads.
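
For illustration, a minimal sketch of the per-function form the pass checks; the function name and body are placeholders, and the attribute spelling is taken from the patch:

define void @example() #0 {
  ret void
}
attributes #0 = { "amdgpu-prealloc-sgpr-spill-vgprs" }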

SGPR spill VGPRs are WWM registers, so allow them to be allocated
by the SIPreAllocateWWMRegs pass.
This intentionally prevents spilling of these VGPRs when enabled.
perlfu force-pushed the pre-allocate-sgpr-spill-vgprs branch from 489fd54 to 29a24d9 on November 8, 2023 at 06:57
perlfu (Contributor, Author) commented Nov 8, 2023

Rebased after landing the WWM pre-allocation pass move.

ruiling (Contributor) commented Nov 8, 2023

I think it is reasonable to let SIPreAllocateWWMRegs allocate WWM registers, given that WWM register allocation by RA is pretty broken. Maybe it is better to make this the default behavior. The trick we added in SILowerSGPRSpills::extendWWMVirtRegLiveness() no longer helps correctness when a WWM register is either spilled or its live range is split. So, can we teach RA to neither spill virtual WWM registers nor split their live ranges?

cdevadas (Collaborator) commented Nov 9, 2023

> SILowerSGPRSpills::extendWWMVirtRegLiveness() no longer helps correctness when a WWM register is either spilled or its live range is split. So, can we teach RA to neither spill virtual WWM registers nor split their live ranges?

That might not help. I observed that for some HPC and instrumented (ASan-enabled) code, the number of SGPR spills is quite large, eventually requiring more virtual VGPRs when lowering them in SILowerSGPRSpills. These virtual VGPRs have large live ranges and often get allocated only after a live-range split, as their original live ranges somehow end up with RegMask interferences with PhysRegs (with the default 64 MaxNumVGPRs available during regalloc); allocation happens only after splitting them into smaller live ranges. So, I believe we can't avoid live-range splitting and spilling for WWM allocation.

ruiling (Contributor) left a comment:

You may need to check within processDef() whether a virtual register was already pushed into RegsToRewrite, so we don't process one register several times, as an SGPR spill may have lots of re-definitions. Other parts LGTM.

ruiling (Contributor) commented Nov 10, 2023

> You may need to check within processDef() whether a virtual register was already pushed into RegsToRewrite, so we don't process one register several times, as an SGPR spill may have lots of re-definitions. Other parts LGTM.

On a second look, VRM->hasPhys(Reg) should already do that?
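
For context, a simplified paraphrase of the early exit in question (a sketch, not the verbatim source; assertions and the VGPR register-class check are elided):

// Simplified paraphrase of SIPreAllocateWWMRegs::processDef(), not the
// verbatim source: a virtual register that was already assigned a physical
// register by an earlier definition exits early through VRM->hasPhys(), so
// repeated SGPR-spill definitions of the same VGPR are processed only once.
bool SIPreAllocateWWMRegs::processDef(MachineOperand &MO) {
  Register Reg = MO.getReg();
  if (!Reg.isVirtual())
    return false;

  if (VRM->hasPhys(Reg)) // already allocated by a previous definition
    return false;

  LiveInterval &LI = LIS->getInterval(Reg);
  for (MCRegister PhysReg : RegClassInfo.getOrder(MRI->getRegClass(Reg))) {
    if (!MRI->isPhysRegUsed(PhysReg) &&
        Matrix->checkInterference(LI, PhysReg) == LiveRegMatrix::IK_Free) {
      Matrix->assign(LI, PhysReg);  // reserve the physical register
      RegsToRewrite.push_back(Reg); // rewritten once the scan completes
      return true;
    }
  }
  return false; // no free register; leave the vreg to the regular allocator
}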

ruiling (Contributor) left a comment:

Should be good to go if there are no further concerns from others.

cdevadas (Collaborator) left a comment:

LGTM

perlfu merged commit edc38a6 into llvm:main on Nov 13, 2023
zahiraam pushed a commit to zahiraam/llvm-project that referenced this pull request Nov 20, 2023
SGPR spill VGPRs are WWM registers, so allow them to be allocated by the
SIPreAllocateWWMRegs pass.
This intentionally prevents spilling of these VGPRs when enabled.
Guzhu-AMD pushed a commit to GPUOpen-Drivers/llvm-project that referenced this pull request Dec 7, 2023
Local branch amd-gfx 1962fcd Revert "[AMDGPU] Add option to pre-allocate SGPR spill VGPRs (llvm#70626)"
Remote branch main a51196e [llvm-reduce] Remove unreachable branch (NFC)