[CodeGen] Get stack alignment from TargetFrameLowering #105477


Draft: s-barannikov wants to merge 3 commits into main
Conversation

s-barannikov
Contributor

It is both easier to use and safer than DL.getStackAlignment() because the DataLayout method asserts if the stack alignment wasn't specified. DL.getStackAlignment() is supposed to be used by middle-end passes.

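For reference, a minimal sketch of the difference between the two accessors (plain C++ using the existing LLVM APIs named above; an illustration, not code from this PR):

#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetFrameLowering.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/IR/DataLayout.h"
using namespace llvm;

static Align queryStackAlign(const MachineFunction &MF) {
  // DataLayout path: asserts when the module's datalayout string has no
  // "S<n>" component, so callers must already know the alignment is set.
  // Align FromDL = MF.getDataLayout().getStackAlignment();

  // TargetFrameLowering path: every target constructs its frame lowering
  // with an explicit stack alignment, so this always yields a value.
  return MF.getSubtarget().getFrameLowering()->getStackAlign();
}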
@llvmbot
Member

llvmbot commented Aug 21, 2024

@llvm/pr-subscribers-backend-amdgpu
@llvm/pr-subscribers-backend-x86
@llvm/pr-subscribers-llvm-globalisel

@llvm/pr-subscribers-llvm-selectiondag

Author: Sergei Barannikov (s-barannikov)

Changes

It is both easier to use and safer than DL.getStackAlignment() because the DataLayout method asserts if the stack alignment wasn't specified. DL.getStackAlignment() is supposed to be used by middle-end passes.


Full diff: https://github.com/llvm/llvm-project/pull/105477.diff

6 Files Affected:

  • (modified) llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp (+8-6)
  • (modified) llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp (+12-9)
  • (modified) llvm/lib/Target/AArch64/AArch64CallingConvention.cpp (+1-2)
  • (modified) llvm/lib/Target/ARM/ARMCallingConv.cpp (+4-3)
  • (modified) llvm/lib/Target/ARM/ARMISelLowering.cpp (+4-5)
  • (modified) llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp (+3-3)
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index bdbef20e20960d..498b37f9360090 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -9133,9 +9133,10 @@ LegalizerHelper::lowerMemcpy(MachineInstr &MI, Register Dst, Register Src,
     // Don't promote to an alignment that would require dynamic stack
     // realignment.
     const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
-    if (!TRI->hasStackRealignment(MF))
-      while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
-        NewAlign = NewAlign.previous();
+    if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+      const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+      NewAlign = std::min(NewAlign, TFL->getStackAlign());
+    }
 
     if (NewAlign > Alignment) {
       Alignment = NewAlign;
@@ -9241,9 +9242,10 @@ LegalizerHelper::lowerMemmove(MachineInstr &MI, Register Dst, Register Src,
     // Don't promote to an alignment that would require dynamic stack
     // realignment.
     const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
-    if (!TRI->hasStackRealignment(MF))
-      while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
-        NewAlign = NewAlign.previous();
+    if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+      const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+      NewAlign = std::min(NewAlign, TFL->getStackAlign());
+    }
 
     if (NewAlign > Alignment) {
       Alignment = NewAlign;
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index 18a3b7bce104a7..ebfa8fb7d6395d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -7893,9 +7893,10 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
     // realignment which may conflict with optimizations such as tail call
     // optimization.
     const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
-    if (!TRI->hasStackRealignment(MF))
-      while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
-        NewAlign = NewAlign.previous();
+    if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+      const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+      NewAlign = std::min(NewAlign, TFL->getStackAlign());
+    }
 
     if (NewAlign > Alignment) {
       // Give the stack frame object a larger alignment if needed.
@@ -8088,9 +8089,10 @@ static SDValue getMemmoveLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
     // realignment which may conflict with optimizations such as tail call
     // optimization.
     const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
-    if (!TRI->hasStackRealignment(MF))
-      while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
-        NewAlign = NewAlign.previous();
+    if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+      const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+      NewAlign = std::min(NewAlign, TFL->getStackAlign());
+    }
 
     if (NewAlign > Alignment) {
       // Give the stack frame object a larger alignment if needed.
@@ -8206,9 +8208,10 @@ static SDValue getMemsetStores(SelectionDAG &DAG, const SDLoc &dl,
     // realignment which may conflict with optimizations such as tail call
     // optimization.
     const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
-    if (!TRI->hasStackRealignment(MF))
-      while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
-        NewAlign = NewAlign.previous();
+    if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+      const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+      NewAlign = std::min(NewAlign, TFL->getStackAlign());
+    }
 
     if (NewAlign > Alignment) {
       // Give the stack frame object a larger alignment if needed.
diff --git a/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp b/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp
index 9a804c12939c4b..1a86af1edb3755 100644
--- a/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp
+++ b/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp
@@ -209,8 +209,7 @@ static bool CC_AArch64_Custom_Block(unsigned &ValNo, MVT &ValVT, MVT &LocVT,
       State.AllocateReg(Reg);
   }
 
-  const Align StackAlign =
-      State.getMachineFunction().getDataLayout().getStackAlignment();
+  const Align StackAlign = Subtarget.getFrameLowering()->getStackAlign();
   const Align MemAlign = ArgFlags.getNonZeroMemAlign();
   Align SlotAlign = std::min(MemAlign, StackAlign);
   if (!Subtarget.isTargetDarwin())
diff --git a/llvm/lib/Target/ARM/ARMCallingConv.cpp b/llvm/lib/Target/ARM/ARMCallingConv.cpp
index 4878c73138940d..243b21ad74c74a 100644
--- a/llvm/lib/Target/ARM/ARMCallingConv.cpp
+++ b/llvm/lib/Target/ARM/ARMCallingConv.cpp
@@ -189,8 +189,9 @@ static bool CC_ARM_AAPCS_Custom_Aggregate(unsigned ValNo, MVT ValVT,
 
   // Try to allocate a contiguous block of registers, each of the correct
   // size to hold one member.
-  auto &DL = State.getMachineFunction().getDataLayout();
-  const Align StackAlign = DL.getStackAlignment();
+  const auto &Subtarget =
+      State.getMachineFunction().getSubtarget<ARMSubtarget>();
+  const Align StackAlign = Subtarget.getFrameLowering()->getStackAlign();
   const Align FirstMemberAlign(PendingMembers[0].getExtraInfo());
   Align Alignment = std::min(FirstMemberAlign, StackAlign);
 
@@ -265,7 +266,7 @@ static bool CC_ARM_AAPCS_Custom_Aggregate(unsigned ValNo, MVT ValVT,
     State.AllocateReg(Reg);
 
   // Clamp the alignment between 4 and 8.
-  if (State.getMachineFunction().getSubtarget<ARMSubtarget>().isTargetAEABI())
+  if (Subtarget.isTargetAEABI())
     Alignment = ArgFlags.getNonZeroMemAlign() <= 4 ? Align(4) : Align(8);
 
   // After the first item has been allocated, the rest are packed as tightly as
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 1e8bb8a495e68b..e8760ef5f1692e 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -2461,7 +2461,7 @@ ARMTargetLowering::LowerCall(TargetLowering::CallLoweringInfo &CLI,
 
     // Since callee will pop argument stack as a tail call, we must keep the
     // popped size 16-byte aligned.
-    Align StackAlign = DAG.getDataLayout().getStackAlignment();
+    Align StackAlign = Subtarget->getFrameLowering()->getStackAlign();
     NumBytes = alignTo(NumBytes, StackAlign);
 
     // SPDiff will be negative if this tail call requires more space than we
@@ -4711,9 +4711,8 @@ SDValue ARMTargetLowering::LowerFormalArguments(
   if (canGuaranteeTCO(CallConv, TailCallOpt)) {
     // The only way to guarantee a tail call is if the callee restores its
     // argument area, but it must also keep the stack aligned when doing so.
-    const DataLayout &DL = DAG.getDataLayout();
-    StackArgSize = alignTo(StackArgSize, DL.getStackAlignment());
-
+    Align StackAlign = Subtarget->getFrameLowering()->getStackAlign();
+    StackArgSize = alignTo(StackArgSize, StackAlign);
     AFI->setArgumentStackToRestore(StackArgSize);
   }
   AFI->setArgumentStackSize(StackArgSize);
@@ -22030,7 +22029,7 @@ Align ARMTargetLowering::getABIAlignmentForCallingConv(
 
   // Avoid over-aligning vector parameters. It would require realigning the
   // stack and waste space for no real benefit.
-  return std::min(ABITypeAlign, DL.getStackAlignment());
+  return std::min(ABITypeAlign, Subtarget->getFrameLowering()->getStackAlign());
 }
 
 /// Return true if a type is an AAPCS-VFP homogeneous aggregate or one of
diff --git a/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp b/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp
index 563601b722c803..be25cf35166ba8 100644
--- a/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp
+++ b/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp
@@ -1189,9 +1189,9 @@ WebAssemblyTargetLowering::LowerCall(CallLoweringInfo &CLI,
   if (IsVarArg && NumBytes) {
     // For non-fixed arguments, next emit stores to store the argument values
     // to the stack buffer at the offsets computed above.
-    int FI = MF.getFrameInfo().CreateStackObject(NumBytes,
-                                                 Layout.getStackAlignment(),
-                                                 /*isSS=*/false);
+    Align StackAlign = Subtarget->getFrameLowering()->getStackAlign();
+    int FI = MF.getFrameInfo().CreateStackObject(NumBytes, StackAlign,
+                                                 /*isSpillSlot=*/false);
     unsigned ValNo = 0;
     SmallVector<SDValue, 8> Chains;
     for (SDValue Arg : drop_begin(OutVals, NumFixedArgs)) {

@llvmbot
Member

llvmbot commented Aug 21, 2024

@llvm/pr-subscribers-backend-aarch64

@llvmbot
Member

llvmbot commented Aug 21, 2024

@llvm/pr-subscribers-backend-webassembly

@llvmbot
Member

llvmbot commented Aug 21, 2024

@llvm/pr-subscribers-backend-arm

@arsenm
Contributor

arsenm commented Aug 21, 2024

DL.getStackAlignment() because the DataLayout method asserts if the stack alignment wasn't specified.

This is just a bug that should be fixed? In every other context the datalayout treats fields as having default values.

while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
NewAlign = NewAlign.previous();
if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
Contributor

Can we assert that the TFL value is the same as the datalayout? Preferably we wouldn't have 2, and only have the DL one?

Contributor Author

I don't like having two of them either, but AIUI the one from DataLayout is more of a heuristic.

Contributor

The TFL one is also just another fixed constant, set in a different place.

Contributor Author

Can we assert that the TFL value is the same as the datalayout?

It appears the values are different on AMDGPU. The data layout string specifies "S32" (4 bytes), but the TFL value is 16 bytes. I couldn't conclude whether this is intended or an oversight.

Contributor

It's not intended. I don't know what the DataLayout one is even used for; it should change to match.

Contributor Author

The DataLayout one is used in similar scenarios, but in middle-end passes, e.g. when promoting alloca alignment.

@s-barannikov
Contributor Author

s-barannikov commented Aug 21, 2024

DL.getStackAlignment() because the DataLayout method asserts if the stack alignment wasn't specified.

This is just a bug that should be fixed? In every other context the datalayout treats fields as having default values.

There is no default value for this one. Quoting LangRef:

If omitted, the natural stack alignment defaults to “unspecified”, which does not prevent any alignment promotions.

I'm going to change the result type of DL.getStackAlignment() to MaybeAlign in the next PR, but I thought it might make sense to switch CodeGen to the TFL method in any case.
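A caller-side sketch of what that change would enable (the MaybeAlign-returning signature is the assumption here; it is not in tree yet):

#include <algorithm>
#include "llvm/IR/DataLayout.h"
using namespace llvm;

// Assumed future signature: DataLayout::getStackAlignment() returns MaybeAlign
// (std::nullopt when the "S<n>" component is omitted), so callers clamp only
// when a natural stack alignment was actually specified.
static Align clampToNaturalStackAlign(const DataLayout &DL, Align Alignment) {
  if (MaybeAlign StackAlign = DL.getStackAlignment()) // hypothetical MaybeAlign return
    Alignment = std::min(Alignment, *StackAlign);
  return Alignment;
}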

@s-barannikov s-barannikov requested a review from nikic August 21, 2024 07:39
@nikic
Contributor

nikic commented Aug 21, 2024

This looks reasonable to me. If we're in the backend, we should query backend data structures.

Note that we also have things like the override-stack-alignment module flag, which TFL takes into account, while DL can't.

if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
NewAlign = std::min(NewAlign, TFL->getStackAlign());
}
Collaborator

There is a lot of code duplication. Why not move this code into a function and use it everywhere?

Contributor Author

Thanks for the suggestion. I'll try to find a common place, or maybe you can suggest one?
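One possible shape for such a shared helper, sketched here under assumed names (where it should live is exactly the open question):

#include <algorithm>
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetFrameLowering.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
using namespace llvm;

// Clamp a proposed mem-op alignment to the target stack alignment unless the
// function is allowed to dynamically realign its stack. The name
// clampMemOpAlignment is an assumption, not part of this PR.
static Align clampMemOpAlignment(const MachineFunction &MF, Align Current,
                                 Align Proposed) {
  const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
  if (Proposed > Current && !TRI->hasStackRealignment(MF))
    Proposed = std::min(
        Proposed, MF.getSubtarget().getFrameLowering()->getStackAlign());
  return Proposed;
}

The call sites in LegalizerHelper.cpp and SelectionDAG.cpp would then reduce to NewAlign = clampMemOpAlignment(MF, Alignment, NewAlign);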

@s-barannikov s-barannikov changed the title from "[CodeGen] Get stack alignment from TargetFrameLowering (NFCI)" to "[CodeGen] Get stack alignment from TargetFrameLowering" Aug 21, 2024
@s-barannikov
Contributor Author

So this isn't NFC, due to two failing tests (now updated).
For i686-unknown, the data layout string specifies "S128" (16-byte alignment), while FrameLowering is constructed with the default alignment of 4 bytes (not overridden by these checks).

@arsenm Can you take a look at what has gone wrong with the AMDGPU test, please?

@arsenm
Contributor

arsenm commented Aug 21, 2024

Note that we also have things like the override-stack-alignment module flag, which TFL takes into account, while DL can't.

I don't think it actually does take that into account. I see it used in the x86 and Mips TargetMachine constructors, but I don't see this wired up to TFL. That also doesn't work, because the TargetFrameLowering is owned by the subtarget, and does not have the context of a particular module.

Also, this is another point where the DataLayout acts like it's the point of backend configuration when it's not really, so we grew this other side mechanism. It would be better if backends actually respected everything in the DL string.

@s-barannikov
Contributor Author

s-barannikov commented Aug 21, 2024

I don't think it actually does take that into account. I see it used in the x86 and Mips TargetMachine constructors, but I don't see this wired up to TFL.

The Subtarget is constructed with the value of the flag here:

MaybeAlign(F.getParent()->getOverrideStackAlignment()),

It is then passed to the TFL constructor.

@s-barannikov
Contributor Author

s-barannikov commented Aug 21, 2024

Also, this is another point where the DataLayout acts like it's the point of backend configuration when it's not really, so we grew this other side mechanism. It would be better if backends actually respected everything in the DL string.

For that matter, DataLayout is too limited to represent all the nuances of a backend.
I had an idea that we introduce a thin class encapsulating all the information about a backend that the frontend / middle end needs in order to function, and link it in even if the target is disabled in LLVM_TARGETS_TO_BUILD. The directory with the obscure name TargetParser is where I would put it.
But this is a huge amount of work, so I didn't even bother filing an RFC for it.

@s-barannikov
Contributor Author

s-barannikov commented Aug 21, 2024

So how shall I proceed?
I can close this PR and add more changes in #105478 to preserve the current behavior and postpone resolution of the issue.
@arsenm @nikic Would that be fine with you?

@nikic
Contributor

nikic commented Aug 21, 2024

So how shall I proceed? I can close this PR and add more changes in #105478 to preserve the current behavior and postpone resolution of the issue.

That would be okay, but I think it would also be good to fix the DL / TFL mismatches that were identified here. And maybe add an assertion that they are in sync if override-stack-alignment is not used.
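A minimal sketch of that assertion, assuming DL.getStackAlignment() is changed to return MaybeAlign as discussed above, and ignoring the override-stack-alignment case:

#include <cassert>
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetFrameLowering.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/IR/DataLayout.h"
using namespace llvm;

// Sketch only: check that DataLayout and TargetFrameLowering agree whenever
// the datalayout string actually specifies a natural stack alignment.
static void checkStackAlignConsistency(const MachineFunction &MF) {
  if (MaybeAlign DLAlign = MF.getDataLayout().getStackAlignment()) // assumed MaybeAlign return
    assert(*DLAlign == MF.getSubtarget().getFrameLowering()->getStackAlign() &&
           "DataLayout and TargetFrameLowering disagree on stack alignment");
}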

@s-barannikov
Contributor Author

s-barannikov commented Aug 21, 2024

That would be okay, but I think it would also be good to fix the DL / TFL mismatches that were identified here. And maybe add an assertion that they are in sync if override-stack-alignment is not used.

I've updated #105478 and will leave this PR open so that I don't forget to address these issues when I have more time.

@s-barannikov s-barannikov marked this pull request as draft August 21, 2024 12:27
@arsenm
Contributor

arsenm commented Aug 21, 2024

I don't think it actually does take that into account. I see it used in the x86 and Mips TargetMachine constructors, but I don't see this wired up to TFL.

The Subtarget is constructed with the value of the flag here

I think this is a bug. I don't believe anything in TargetMachine should depend on context IR. You can recycle the TargetMachine for different compilations. The TargetMachine owns the subtarget map, which is keyed off the target-cpu/target-features, and could be used from different modules with different flags.

5 participants