
AMDGPU: Fold copy of scalar add of frame index #115058


Merged

Conversation


@arsenm arsenm commented Nov 5, 2024

This is a pre-optimization to avoid a regression in a future
commit. Currently we almost always emit the frame index with
a v_mov_b32 and use vector adds for the pointer operations. We
need to consider the users of the frame index (or rather, the
transitive users of derived pointer operations) to know whether
the value will be used in a vector or scalar context. This fold
saves an sgpr->vgpr copy.

This optimization could be made more general, applying to any
opcode that is trivially convertible from scalar to vector form
(though ultimately this is a workaround for the lack of proper
regbankselect).
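
Roughly, the fold rewrites MIR like this (a minimal before/after
sketch distilled from the new tests below; the stack object and
virtual register numbers are illustrative):

  ; Before: scalar add of a frame index, copied into a VGPR
  %0:sreg_32 = S_ADD_I32 %stack.0, 64, implicit-def dead $scc
  %1:vgpr_32 = COPY %0
  ; After, on a target with a carry-less add (e.g. gfx9):
  %1:vgpr_32 = V_ADD_U32_e64 64, %stack.0, 0, implicit $exec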



llvmbot commented Nov 5, 2024

@llvm/pr-subscribers-backend-amdgpu

Author: Matt Arsenault (arsenm)

Changes

This is a pre-optimization to avoid a regression in a future
commit. Currently we almost always emit the frame index with
a v_mov_b32 and use vector adds for the pointer operations. We
need to consider the users of the frame index (or rather, the
transitive users of derived pointer operations) to know whether
the value will be used in a vector or scalar context. This fold
saves an sgpr->vgpr copy.

This optimization could be made more general, applying to any
opcode that is trivially convertible from scalar to vector form
(though ultimately this is a workaround for the lack of proper
regbankselect).


Patch is 20.96 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/115058.diff

2 Files Affected:

  • (modified) llvm/lib/Target/AMDGPU/SIFoldOperands.cpp (+74-2)
  • (added) llvm/test/CodeGen/AMDGPU/fold-operands-s-add-copy-to-vgpr.mir (+394)
diff --git a/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp b/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
index f0c7837e0bb75a..28bcbd58dc0376 100644
--- a/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
+++ b/llvm/lib/Target/AMDGPU/SIFoldOperands.cpp
@@ -78,6 +78,12 @@ class SIFoldOperandsImpl {
   bool frameIndexMayFold(const MachineInstr &UseMI, int OpNo,
                          const MachineOperand &OpToFold) const;
 
+  /// Fold %vgpr = COPY (S_ADD_I32 x, frameindex)
+  ///
+  ///   => %vgpr = V_ADD_U32 x, frameindex
+  bool foldCopyToVGPROfScalarAddOfFrameIndex(Register DstReg, Register SrcReg,
+                                             MachineInstr &MI) const;
+
   bool updateOperand(FoldCandidate &Fold) const;
 
   bool canUseImmWithOpSel(FoldCandidate &Fold) const;
@@ -224,6 +230,67 @@ bool SIFoldOperandsImpl::frameIndexMayFold(
   return OpNo == VIdx && SIdx == -1;
 }
 
+/// Fold %vgpr = COPY (S_ADD_I32 x, frameindex)
+///
+///   => %vgpr = V_ADD_U32 x, frameindex
+bool SIFoldOperandsImpl::foldCopyToVGPROfScalarAddOfFrameIndex(
+    Register DstReg, Register SrcReg, MachineInstr &MI) const {
+  if (TRI->isVGPR(*MRI, DstReg) && TRI->isSGPRReg(*MRI, SrcReg) &&
+      MRI->hasOneNonDBGUse(SrcReg)) {
+    MachineInstr *Def = MRI->getVRegDef(SrcReg);
+    if (Def && Def->getOpcode() == AMDGPU::S_ADD_I32 &&
+        Def->getOperand(3).isDead()) {
+      MachineOperand *Src0 = &Def->getOperand(1);
+      MachineOperand *Src1 = &Def->getOperand(2);
+
+      // TODO: This is profitable with more operand types, and for more
+      // opcodes. But ultimately this is working around poor / nonexistent
+      // regbankselect.
+      if (!Src0->isFI() && !Src1->isFI())
+        return false;
+
+      if (Src0->isFI())
+        std::swap(Src0, Src1);
+
+      MachineBasicBlock *MBB = Def->getParent();
+      const DebugLoc &DL = Def->getDebugLoc();
+      if (ST->hasAddNoCarry()) {
+        bool UseVOP3 = !Src0->isImm() || TII->isInlineConstant(*Src0);
+        MachineInstrBuilder Add =
+            BuildMI(*MBB, *Def, DL,
+                    TII->get(UseVOP3 ? AMDGPU::V_ADD_U32_e64
+                                     : AMDGPU::V_ADD_U32_e32),
+                    DstReg)
+                .add(*Src0)
+                .add(*Src1)
+                .setMIFlags(Def->getFlags());
+        if (UseVOP3)
+          Add.addImm(0);
+
+        Def->eraseFromParent();
+        MI.eraseFromParent();
+        return true;
+      }
+
+      MachineBasicBlock::LivenessQueryResult Liveness =
+          MBB->computeRegisterLiveness(TRI, AMDGPU::VCC, *Def, 16);
+      if (Liveness == MachineBasicBlock::LQR_Dead) {
+        // TODO: If src1 satisfies operand constraints, use vop3 version.
+        BuildMI(*MBB, *Def, DL, TII->get(AMDGPU::V_ADD_CO_U32_e32), DstReg)
+            .add(*Src0)
+            .add(*Src1)
+            .setOperandDead(3) // implicit-def $vcc
+            .setMIFlags(Def->getFlags());
+        Def->eraseFromParent();
+        MI.eraseFromParent();
+        return true;
+      }
+    }
+  }
+
+  return false;
+}
+
 FunctionPass *llvm::createSIFoldOperandsLegacyPass() {
   return new SIFoldOperandsLegacy();
 }
@@ -1470,9 +1537,10 @@ bool SIFoldOperandsImpl::foldInstOperand(MachineInstr &MI,
 
 bool SIFoldOperandsImpl::tryFoldFoldableCopy(
     MachineInstr &MI, MachineOperand *&CurrentKnownM0Val) const {
+  Register DstReg = MI.getOperand(0).getReg();
   // Specially track simple redefs of m0 to the same value in a block, so we
   // can erase the later ones.
-  if (MI.getOperand(0).getReg() == AMDGPU::M0) {
+  if (DstReg == AMDGPU::M0) {
     MachineOperand &NewM0Val = MI.getOperand(1);
     if (CurrentKnownM0Val && CurrentKnownM0Val->isIdenticalTo(NewM0Val)) {
       MI.eraseFromParent();
@@ -1504,13 +1572,17 @@ bool SIFoldOperandsImpl::tryFoldFoldableCopy(
   if (OpToFold.isReg() && !OpToFold.getReg().isVirtual())
     return false;
 
+  if (OpToFold.isReg() &&
+      foldCopyToVGPROfScalarAddOfFrameIndex(DstReg, OpToFold.getReg(), MI))
+    return true;
+
   // Prevent folding operands backwards in the function. For example,
   // the COPY opcode must not be replaced by 1 in this example:
   //
   //    %3 = COPY %vgpr0; VGPR_32:%3
   //    ...
   //    %vgpr0 = V_MOV_B32_e32 1, implicit %exec
-  if (!MI.getOperand(0).getReg().isVirtual())
+  if (!DstReg.isVirtual())
     return false;
 
   bool Changed = foldInstOperand(MI, OpToFold);
diff --git a/llvm/test/CodeGen/AMDGPU/fold-operands-s-add-copy-to-vgpr.mir b/llvm/test/CodeGen/AMDGPU/fold-operands-s-add-copy-to-vgpr.mir
new file mode 100644
index 00000000000000..683f02b413315e
--- /dev/null
+++ b/llvm/test/CodeGen/AMDGPU/fold-operands-s-add-copy-to-vgpr.mir
@@ -0,0 +1,394 @@
+# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
+# RUN: llc -mtriple=amdgcn -mcpu=gfx803 -verify-machineinstrs -run-pass=si-fold-operands %s -o - | FileCheck -check-prefixes=CHECK,GFX8 %s
+# RUN: llc -mtriple=amdgcn -mcpu=gfx900 -verify-machineinstrs -run-pass=si-fold-operands %s -o - | FileCheck -check-prefixes=CHECK,GFX9 %s
+# RUN: llc -mtriple=amdgcn -mcpu=gfx1030 -mattr=+wavefrontsize64 -verify-machineinstrs -run-pass=si-fold-operands %s -o - | FileCheck -check-prefixes=CHECK,GFX10 %s
+# RUN: llc -mtriple=amdgcn -mcpu=gfx1200 -mattr=+wavefrontsize64 -verify-machineinstrs -run-pass=si-fold-operands %s -o - | FileCheck -check-prefixes=CHECK,GFX10 %s
+
+---
+name:  copy_undef
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; CHECK-LABEL: name: copy_undef
+    ; CHECK: [[COPY:%[0-9]+]]:vgpr_32 = COPY undef %2:sreg_32
+    ; CHECK-NEXT: SI_RETURN implicit [[COPY]]
+    %0:sreg_32 = S_MOV_B32 %stack.0
+    %2:vgpr_32 = COPY undef %1:sreg_32
+    SI_RETURN implicit %2
+...
+
+---
+name:  fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr
+    ; GFX8: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = nuw V_ADD_CO_U32_e32 128, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr
+    ; GFX9: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = nuw V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr
+    ; GFX10: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = nuw V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    %0:sreg_32 = S_MOV_B32 %stack.0
+    %1:sreg_32 = nuw S_ADD_I32 %0, 128, implicit-def dead $scc
+    %2:vgpr_32 = COPY %1
+    SI_RETURN implicit %2
+...
+
+---
+name:  fold_s_add_i32__const_copy_mov_fi_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__const_copy_mov_fi_to_virt_vgpr
+    ; GFX8: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32_e32 128, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__const_copy_mov_fi_to_virt_vgpr
+    ; GFX9: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__const_copy_mov_fi_to_virt_vgpr
+    ; GFX10: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    %0:sreg_32 = S_MOV_B32 %stack.0
+    %1:sreg_32 = S_ADD_I32 128, %0, implicit-def dead $scc
+    %2:vgpr_32 = COPY %1
+    SI_RETURN implicit %2
+...
+
+---
+name:  fold_s_add_i32__fi_imm_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__fi_imm_copy_to_virt_vgpr
+    ; GFX8: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = nuw V_ADD_CO_U32_e32 64, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__fi_imm_copy_to_virt_vgpr
+    ; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = nuw V_ADD_U32_e64 64, %stack.0, 0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__fi_imm_copy_to_virt_vgpr
+    ; GFX10: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = nuw V_ADD_U32_e64 64, %stack.0, 0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    %0:sreg_32 = nuw S_ADD_I32 %stack.0, 64, implicit-def dead $scc
+    %1:vgpr_32 = COPY %0
+    SI_RETURN implicit %1
+...
+
+---
+name:  fold_s_add_i32__imm_fi_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__imm_fi_copy_to_virt_vgpr
+    ; GFX8: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = nuw V_ADD_CO_U32_e32 64, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__imm_fi_copy_to_virt_vgpr
+    ; GFX9: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = nuw V_ADD_U32_e64 64, %stack.0, 0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__imm_fi_copy_to_virt_vgpr
+    ; GFX10: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = nuw V_ADD_U32_e64 64, %stack.0, 0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    %0:sreg_32 = nuw S_ADD_I32 64, %stack.0, implicit-def dead $scc
+    %1:vgpr_32 = COPY %0
+    SI_RETURN implicit %1
+...
+
+---
+name:  fold_s_add_i32__mov_fi_const_copy_to_phys_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_phys_vgpr
+    ; GFX8: $vgpr0 = V_ADD_CO_U32_e32 128, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit $vgpr0
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_phys_vgpr
+    ; GFX9: $vgpr0 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit $vgpr0
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_phys_vgpr
+    ; GFX10: $vgpr0 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit $vgpr0
+    %0:sreg_32 = S_MOV_B32 %stack.0
+    %1:sreg_32 = S_ADD_I32 %0, 128, implicit-def dead $scc
+    $vgpr0 = COPY %1
+    SI_RETURN implicit $vgpr0
+...
+
+---
+name:  fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr_live_vcc
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    liveins: $vcc
+    ; GFX8-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr_live_vcc
+    ; GFX8: liveins: $vcc
+    ; GFX8-NEXT: {{  $}}
+    ; GFX8-NEXT: [[S_ADD_I32_:%[0-9]+]]:sreg_32 = S_ADD_I32 %stack.0, 128, implicit-def dead $scc
+    ; GFX8-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY [[S_ADD_I32_]]
+    ; GFX8-NEXT: SI_RETURN implicit [[COPY]], implicit $vcc
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr_live_vcc
+    ; GFX9: liveins: $vcc
+    ; GFX9-NEXT: {{  $}}
+    ; GFX9-NEXT: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]], implicit $vcc
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr_live_vcc
+    ; GFX10: liveins: $vcc
+    ; GFX10-NEXT: {{  $}}
+    ; GFX10-NEXT: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]], implicit $vcc
+    %0:sreg_32 = S_MOV_B32 %stack.0
+    %1:sreg_32 = S_ADD_I32 %0, 128, implicit-def dead $scc
+    %2:vgpr_32 = COPY %1
+    SI_RETURN implicit %2, implicit $vcc
+...
+
+---
+name:  fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr_live_scc
+tracksRegLiveness: true
+frameInfo:
+  maxAlignment:    4
+  localFrameSize:  16384
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; CHECK-LABEL: name: fold_s_add_i32__mov_fi_const_copy_to_virt_vgpr_live_scc
+    ; CHECK: [[S_ADD_I32_:%[0-9]+]]:sreg_32 = S_ADD_I32 %stack.0, 128, implicit-def $scc
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY [[S_ADD_I32_]]
+    ; CHECK-NEXT: SI_RETURN implicit [[COPY]], implicit $scc
+    %0:sreg_32 = S_MOV_B32 %stack.0
+    %1:sreg_32 = S_ADD_I32 %0, 128, implicit-def $scc
+    %2:vgpr_32 = COPY %1
+    SI_RETURN implicit %2, implicit $scc
+...
+
+---
+name:  fold_s_add_i32__mov_fi_reg_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    liveins: $sgpr8
+
+    ; GFX8-LABEL: name: fold_s_add_i32__mov_fi_reg_copy_to_virt_vgpr
+    ; GFX8: liveins: $sgpr8
+    ; GFX8-NEXT: {{  $}}
+    ; GFX8-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX8-NEXT: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32_e32 [[COPY]], %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__mov_fi_reg_copy_to_virt_vgpr
+    ; GFX9: liveins: $sgpr8
+    ; GFX9-NEXT: {{  $}}
+    ; GFX9-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX9-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], %stack.0, 0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__mov_fi_reg_copy_to_virt_vgpr
+    ; GFX10: liveins: $sgpr8
+    ; GFX10-NEXT: {{  $}}
+    ; GFX10-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX10-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], %stack.0, 0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    %0:sreg_32 = COPY $sgpr8
+    %1:sreg_32 = S_MOV_B32 %stack.0
+    %2:sreg_32 = S_ADD_I32 %0, %1, implicit-def dead $scc
+    %3:vgpr_32 = COPY %2
+    SI_RETURN implicit %3
+...
+
+
+---
+name:  fold_s_add_i32__reg_copy_mov_fi_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    liveins: $sgpr8
+
+    ; GFX8-LABEL: name: fold_s_add_i32__reg_copy_mov_fi_to_virt_vgpr
+    ; GFX8: liveins: $sgpr8
+    ; GFX8-NEXT: {{  $}}
+    ; GFX8-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX8-NEXT: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32_e32 [[COPY]], %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__reg_copy_mov_fi_to_virt_vgpr
+    ; GFX9: liveins: $sgpr8
+    ; GFX9-NEXT: {{  $}}
+    ; GFX9-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX9-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], %stack.0, 0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__reg_copy_mov_fi_to_virt_vgpr
+    ; GFX10: liveins: $sgpr8
+    ; GFX10-NEXT: {{  $}}
+    ; GFX10-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX10-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], %stack.0, 0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    %0:sreg_32 = COPY $sgpr8
+    %1:sreg_32 = S_MOV_B32 %stack.0
+    %2:sreg_32 = S_ADD_I32 %1, %0, implicit-def dead $scc
+    %3:vgpr_32 = COPY %2
+    SI_RETURN implicit %3
+...
+
+---
+name:  fold_s_add_i32__fi_fi_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+  - { id: 1, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; CHECK-LABEL: name: fold_s_add_i32__fi_fi_copy_to_virt_vgpr
+    ; CHECK: [[S_ADD_I32_:%[0-9]+]]:sreg_32 = S_ADD_I32 %stack.0, %stack.1, implicit-def dead $scc
+    ; CHECK-NEXT: [[COPY:%[0-9]+]]:vgpr_32 = COPY [[COPY]]
+    ; CHECK-NEXT: SI_RETURN implicit [[COPY]]
+    %0:sreg_32 = S_ADD_I32 %stack.0, %stack.1, implicit-def dead $scc
+    %1:vgpr_32 = COPY %1
+    SI_RETURN implicit %1
+...
+
+---
+name:  fold_s_add_i32__fi_const_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__fi_const_copy_to_virt_vgpr
+    ; GFX8: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32_e32 128, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__fi_const_copy_to_virt_vgpr
+    ; GFX9: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__fi_const_copy_to_virt_vgpr
+    ; GFX10: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    %0:sreg_32 = S_ADD_I32 %stack.0, 128, implicit-def dead $scc
+    %1:vgpr_32 = COPY %0
+    SI_RETURN implicit %1
+...
+
+---
+name:  fold_s_add_i32__const_fi_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    ; GFX8-LABEL: name: fold_s_add_i32__const_fi_copy_to_virt_vgpr
+    ; GFX8: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32_e32 128, %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__const_fi_copy_to_virt_vgpr
+    ; GFX9: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__const_fi_copy_to_virt_vgpr
+    ; GFX10: [[V_ADD_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e32_]]
+    %0:sreg_32 = S_ADD_I32 128, %stack.0, implicit-def dead $scc
+    %1:vgpr_32 = COPY %0
+    SI_RETURN implicit %1
+...
+
+---
+name:  fold_s_add_i32__fi_reg_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    liveins: $sgpr8
+    ; GFX8-LABEL: name: fold_s_add_i32__fi_reg_copy_to_virt_vgpr
+    ; GFX8: liveins: $sgpr8
+    ; GFX8-NEXT: {{  $}}
+    ; GFX8-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX8-NEXT: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32_e32 [[COPY]], %stack.0, implicit-def dead $vcc, implicit $exec
+    ; GFX8-NEXT: SI_RETURN implicit [[V_ADD_CO_U32_e32_]]
+    ;
+    ; GFX9-LABEL: name: fold_s_add_i32__fi_reg_copy_to_virt_vgpr
+    ; GFX9: liveins: $sgpr8
+    ; GFX9-NEXT: {{  $}}
+    ; GFX9-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX9-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], %stack.0, 0, implicit $exec
+    ; GFX9-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    ;
+    ; GFX10-LABEL: name: fold_s_add_i32__fi_reg_copy_to_virt_vgpr
+    ; GFX10: liveins: $sgpr8
+    ; GFX10-NEXT: {{  $}}
+    ; GFX10-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX10-NEXT: [[V_ADD_U32_e64_:%[0-9]+]]:vgpr_32 = V_ADD_U32_e64 [[COPY]], %stack.0, 0, implicit $exec
+    ; GFX10-NEXT: SI_RETURN implicit [[V_ADD_U32_e64_]]
+    %0:sreg_32 = COPY $sgpr8
+    %1:sreg_32 = S_ADD_I32 %stack.0, %0, implicit-def dead $scc
+    %2:vgpr_32 = COPY %1
+    SI_RETURN implicit %2
+...
+
+---
+name:  fold_s_add_i32__reg_fi_copy_to_virt_vgpr
+tracksRegLiveness: true
+stack:
+  - { id: 0, size: 16384, alignment: 4, local-offset: 0 }
+body:             |
+  bb.0:
+    liveins: $sgpr8
+    ; GFX8-LABEL: name: fold_s_add_i32__reg_fi_copy_to_virt_vgpr
+    ; GFX8: liveins: $sgpr8
+    ; GFX8-NEXT: {{  $}}
+    ; GFX8-NEXT: [[COPY:%[0-9]+]]:sreg_32 = COPY $sgpr8
+    ; GFX8-NEXT: [[V_ADD_CO_U32_e32_:%[0-9]+]]:vgpr_32 = V_ADD_CO_U32...
[truncated]
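
To summarize the opcode selection the patch implements (a condensed
sketch assembled from the test checks above; register names are
illustrative):

  ; gfx9/gfx10+ (hasAddNoCarry): a carry-less VALU add is emitted
  %v:vgpr_32 = V_ADD_U32_e32 128, %stack.0, implicit $exec
  ; gfx8: the carry-out add is used, but only when $vcc is known
  ; dead at the add
  %v:vgpr_32 = V_ADD_CO_U32_e32 128, %stack.0, implicit-def dead $vcc, implicit $exec
  ; If $vcc is live on gfx8, or the S_ADD_I32's $scc def is not dead,
  ; the fold is skipped and the scalar add plus copy remain:
  %s:sreg_32 = S_ADD_I32 %stack.0, 128, implicit-def dead $scc
  %v:vgpr_32 = COPY %s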

@arsenm arsenm marked this pull request as ready for review November 5, 2024 20:42
@@ -78,6 +78,12 @@ class SIFoldOperandsImpl {
bool frameIndexMayFold(const MachineInstr &UseMI, int OpNo,
const MachineOperand &OpToFold) const;

/// Fold %vgpr = COPY (S_ADD_I32 x, frameindex)
///
/// => %vgpr = V_ADD_U32 x, frameindex
Collaborator
What prevents this from causing a constant bus violation?

Contributor Author

The frame index has been treated as if it were a VGPR for a few months now, so we don't need to care what the other operand is.
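
For illustration, on that reading this is why a result like the
following (from the fold_s_add_i32__fi_reg_copy_to_virt_vgpr checks
above) is legal even though src0 is an SGPR: the frame index operand
does not count as a second scalar read on the constant bus, since it
is materialized on the vector side:

  %2:vgpr_32 = V_ADD_CO_U32_e32 %0:sreg_32, %stack.0, implicit-def dead $vcc, implicit $exec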


arsenm commented Nov 6, 2024

Merge activity

  • Nov 6, 12:09 PM EST: A user started a stack merge that includes this pull request via Graphite.
  • Nov 6, 12:11 PM EST: A user merged this pull request with Graphite.

@arsenm arsenm merged commit aa79412 into main Nov 6, 2024
12 checks passed
@arsenm arsenm deleted the users/arsenm/amdgpu-si-fold-operands-frame-index-into-add branch November 6, 2024 17:11