[CodeGen] Get stack alignment from TargetFrameLowering #105477
Conversation
TargetFrameLowering::getStackAlign() is both easier to use and safer than DL.getStackAlignment(), because the DataLayout method asserts if the stack alignment wasn't specified. DL.getStackAlignment() is meant to be used by middle-end passes.
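For orientation, here is a minimal stand-alone sketch (not part of the patch; the helper names are made up) contrasting the two queries, using only the LLVM calls that appear in the diff below:

// Sketch only: contrasts the two stack-alignment queries discussed in this PR.
// The helper names are hypothetical; the LLVM calls are the ones used in the diff.
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetFrameLowering.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/Alignment.h"

using namespace llvm;

// Backend-side query: always valid; the subtarget's frame lowering carries a
// fixed stack alignment chosen by the target.
static Align getStackAlignFromTFL(const MachineFunction &MF) {
  const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
  return TFL->getStackAlign();
}

// IR-side query: asserts if the module's data layout string has no "S<bits>"
// component, which is the hazard this PR avoids in CodeGen.
static Align getStackAlignFromDL(const MachineFunction &MF) {
  const DataLayout &DL = MF.getDataLayout();
  return DL.getStackAlignment();
}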
@llvm/pr-subscribers-backend-amdgpu @llvm/pr-subscribers-llvm-selectiondag Author: Sergei Barannikov (s-barannikov) Changes: It is both easier to use and safer than DL.getStackAlignment() because the DataLayout method asserts if the stack alignment wasn't specified. DL.getStackAlignment() is supposed to be used by middle-end passes. Full diff: https://github.com/llvm/llvm-project/pull/105477.diff 6 Files Affected:
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index bdbef20e20960d..498b37f9360090 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -9133,9 +9133,10 @@ LegalizerHelper::lowerMemcpy(MachineInstr &MI, Register Dst, Register Src,
// Don't promote to an alignment that would require dynamic stack
// realignment.
const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
- if (!TRI->hasStackRealignment(MF))
- while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
- NewAlign = NewAlign.previous();
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+ NewAlign = std::min(NewAlign, TFL->getStackAlign());
+ }
if (NewAlign > Alignment) {
Alignment = NewAlign;
@@ -9241,9 +9242,10 @@ LegalizerHelper::lowerMemmove(MachineInstr &MI, Register Dst, Register Src,
// Don't promote to an alignment that would require dynamic stack
// realignment.
const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
- if (!TRI->hasStackRealignment(MF))
- while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
- NewAlign = NewAlign.previous();
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+ NewAlign = std::min(NewAlign, TFL->getStackAlign());
+ }
if (NewAlign > Alignment) {
Alignment = NewAlign;
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index 18a3b7bce104a7..ebfa8fb7d6395d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -7893,9 +7893,10 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
// realignment which may conflict with optimizations such as tail call
// optimization.
const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
- if (!TRI->hasStackRealignment(MF))
- while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
- NewAlign = NewAlign.previous();
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+ NewAlign = std::min(NewAlign, TFL->getStackAlign());
+ }
if (NewAlign > Alignment) {
// Give the stack frame object a larger alignment if needed.
@@ -8088,9 +8089,10 @@ static SDValue getMemmoveLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
// realignment which may conflict with optimizations such as tail call
// optimization.
const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
- if (!TRI->hasStackRealignment(MF))
- while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
- NewAlign = NewAlign.previous();
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+ NewAlign = std::min(NewAlign, TFL->getStackAlign());
+ }
if (NewAlign > Alignment) {
// Give the stack frame object a larger alignment if needed.
@@ -8206,9 +8208,10 @@ static SDValue getMemsetStores(SelectionDAG &DAG, const SDLoc &dl,
// realignment which may conflict with optimizations such as tail call
// optimization.
const TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
- if (!TRI->hasStackRealignment(MF))
- while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
- NewAlign = NewAlign.previous();
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+ NewAlign = std::min(NewAlign, TFL->getStackAlign());
+ }
if (NewAlign > Alignment) {
// Give the stack frame object a larger alignment if needed.
diff --git a/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp b/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp
index 9a804c12939c4b..1a86af1edb3755 100644
--- a/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp
+++ b/llvm/lib/Target/AArch64/AArch64CallingConvention.cpp
@@ -209,8 +209,7 @@ static bool CC_AArch64_Custom_Block(unsigned &ValNo, MVT &ValVT, MVT &LocVT,
State.AllocateReg(Reg);
}
- const Align StackAlign =
- State.getMachineFunction().getDataLayout().getStackAlignment();
+ const Align StackAlign = Subtarget.getFrameLowering()->getStackAlign();
const Align MemAlign = ArgFlags.getNonZeroMemAlign();
Align SlotAlign = std::min(MemAlign, StackAlign);
if (!Subtarget.isTargetDarwin())
diff --git a/llvm/lib/Target/ARM/ARMCallingConv.cpp b/llvm/lib/Target/ARM/ARMCallingConv.cpp
index 4878c73138940d..243b21ad74c74a 100644
--- a/llvm/lib/Target/ARM/ARMCallingConv.cpp
+++ b/llvm/lib/Target/ARM/ARMCallingConv.cpp
@@ -189,8 +189,9 @@ static bool CC_ARM_AAPCS_Custom_Aggregate(unsigned ValNo, MVT ValVT,
// Try to allocate a contiguous block of registers, each of the correct
// size to hold one member.
- auto &DL = State.getMachineFunction().getDataLayout();
- const Align StackAlign = DL.getStackAlignment();
+ const auto &Subtarget =
+ State.getMachineFunction().getSubtarget<ARMSubtarget>();
+ const Align StackAlign = Subtarget.getFrameLowering()->getStackAlign();
const Align FirstMemberAlign(PendingMembers[0].getExtraInfo());
Align Alignment = std::min(FirstMemberAlign, StackAlign);
@@ -265,7 +266,7 @@ static bool CC_ARM_AAPCS_Custom_Aggregate(unsigned ValNo, MVT ValVT,
State.AllocateReg(Reg);
// Clamp the alignment between 4 and 8.
- if (State.getMachineFunction().getSubtarget<ARMSubtarget>().isTargetAEABI())
+ if (Subtarget.isTargetAEABI())
Alignment = ArgFlags.getNonZeroMemAlign() <= 4 ? Align(4) : Align(8);
// After the first item has been allocated, the rest are packed as tightly as
diff --git a/llvm/lib/Target/ARM/ARMISelLowering.cpp b/llvm/lib/Target/ARM/ARMISelLowering.cpp
index 1e8bb8a495e68b..e8760ef5f1692e 100644
--- a/llvm/lib/Target/ARM/ARMISelLowering.cpp
+++ b/llvm/lib/Target/ARM/ARMISelLowering.cpp
@@ -2461,7 +2461,7 @@ ARMTargetLowering::LowerCall(TargetLowering::CallLoweringInfo &CLI,
// Since callee will pop argument stack as a tail call, we must keep the
// popped size 16-byte aligned.
- Align StackAlign = DAG.getDataLayout().getStackAlignment();
+ Align StackAlign = Subtarget->getFrameLowering()->getStackAlign();
NumBytes = alignTo(NumBytes, StackAlign);
// SPDiff will be negative if this tail call requires more space than we
@@ -4711,9 +4711,8 @@ SDValue ARMTargetLowering::LowerFormalArguments(
if (canGuaranteeTCO(CallConv, TailCallOpt)) {
// The only way to guarantee a tail call is if the callee restores its
// argument area, but it must also keep the stack aligned when doing so.
- const DataLayout &DL = DAG.getDataLayout();
- StackArgSize = alignTo(StackArgSize, DL.getStackAlignment());
-
+ Align StackAlign = Subtarget->getFrameLowering()->getStackAlign();
+ StackArgSize = alignTo(StackArgSize, StackAlign);
AFI->setArgumentStackToRestore(StackArgSize);
}
AFI->setArgumentStackSize(StackArgSize);
@@ -22030,7 +22029,7 @@ Align ARMTargetLowering::getABIAlignmentForCallingConv(
// Avoid over-aligning vector parameters. It would require realigning the
// stack and waste space for no real benefit.
- return std::min(ABITypeAlign, DL.getStackAlignment());
+ return std::min(ABITypeAlign, Subtarget->getFrameLowering()->getStackAlign());
}
/// Return true if a type is an AAPCS-VFP homogeneous aggregate or one of
diff --git a/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp b/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp
index 563601b722c803..be25cf35166ba8 100644
--- a/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp
+++ b/llvm/lib/Target/WebAssembly/WebAssemblyISelLowering.cpp
@@ -1189,9 +1189,9 @@ WebAssemblyTargetLowering::LowerCall(CallLoweringInfo &CLI,
if (IsVarArg && NumBytes) {
// For non-fixed arguments, next emit stores to store the argument values
// to the stack buffer at the offsets computed above.
- int FI = MF.getFrameInfo().CreateStackObject(NumBytes,
- Layout.getStackAlignment(),
- /*isSS=*/false);
+ Align StackAlign = Subtarget->getFrameLowering()->getStackAlign();
+ int FI = MF.getFrameInfo().CreateStackObject(NumBytes, StackAlign,
+ /*isSpillSlot=*/false);
unsigned ValNo = 0;
SmallVector<SDValue, 8> Chains;
for (SDValue Arg : drop_begin(OutVals, NumFixedArgs)) {
This is just a bug that should be fixed? In every other context the datalayout treats fields as having default values.
- while (NewAlign > Alignment && DL.exceedsNaturalStackAlignment(NewAlign))
- NewAlign = NewAlign.previous();
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
Can we assert that the TFL value is the same as the datalayout? Preferably we wouldn't have 2, and only have the DL one?
I don't like having two of them either, but AIUI the one from DataLayout is more of a heuristic.
The TFL one is also another fixed constant, just set in a different place
Can we assert that the TFL value is the same as the datalayout?
It appears the values are different on AMDGPU: the data layout string specifies "S32" (4 bytes), but the TFL value is 16 bytes. I couldn't conclude whether this is intended or an oversight.
It's not intended. I don't know what the DataLayout one is even used for, it should change to match
The DataLayout one is used in similar scenarios, but in middle-end passes, e.g. when promoting alloca alignment.
There is no default value for this one. Quoting LangRef:
I'm going to change the result type of
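As a small illustration of that "S" component (the layout strings below are made-up examples, not any target's actual string):

// Illustration only: "S<bits>" in a data layout string sets the natural stack
// alignment that DL.getStackAlignment() returns; omitting it makes that call assert.
#include "llvm/IR/DataLayout.h"
#include "llvm/Support/Alignment.h"
#include <cassert>

void dataLayoutStackAlignExample() {
  llvm::DataLayout WithS("e-S32"); // natural stack alignment of 32 bits
  assert(WithS.getStackAlignment() == llvm::Align(4) && "S32 means 4 bytes");

  llvm::DataLayout WithoutS("e");  // no "S" component specified
  // WithoutS.getStackAlignment() would hit the assertion mentioned in the PR
  // description, which is why backend code should prefer
  // TargetFrameLowering::getStackAlign().
  (void)WithoutS;
}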
This looks reasonable to me. If we're in the backend, we should query backend data structures. Note that we also have things like the
+ if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
+ const TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
+ NewAlign = std::min(NewAlign, TFL->getStackAlign());
+ }
There is a lot of code duplication. Why not move this code into a function and use it everywhere?
Thanks for the suggestion. I'll try to find a common place, or maybe you can suggest one?
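A possible shape for such a helper (the name and placement are hypothetical, not part of this patch), mirroring the block repeated at the five call sites:

// Hypothetical helper sketch: clamp a proposed copy/set alignment to what the
// stack can provide without dynamic realignment.
#include "llvm/CodeGen/MachineFunction.h"
#include "llvm/CodeGen/TargetFrameLowering.h"
#include "llvm/CodeGen/TargetRegisterInfo.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/Support/Alignment.h"
#include <algorithm>

static llvm::Align clampToNonRealignedStackAlign(llvm::Align NewAlign,
                                                 llvm::Align Alignment,
                                                 const llvm::MachineFunction &MF) {
  const llvm::TargetRegisterInfo *TRI = MF.getSubtarget().getRegisterInfo();
  if (NewAlign > Alignment && !TRI->hasStackRealignment(MF)) {
    const llvm::TargetFrameLowering *TFL = MF.getSubtarget().getFrameLowering();
    NewAlign = std::min(NewAlign, TFL->getStackAlign());
  }
  return NewAlign;
}

Each of the call sites in the diff could then do NewAlign = clampToNonRealignedStackAlign(NewAlign, Alignment, MF); instead of repeating the block.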
So this isn't an NFC due to two failing tests (now updated). @arsenm Can you take a look at what has gone wrong with the AMDGPU test, please?
I don't think it actually does take that into account. I see it used in the X86 and Mips TargetMachine constructors, but I don't see this wired up to TFL. That also doesn't work, because the TargetFrameLowering is owned by the subtarget and does not have the context of a particular module. Also, this is another point where the DataLayout acts like it's a point of backend configuration when it isn't really, so we grew this other side mechanism. It would be better if backends actually respected everything in the DL string.
It is then passed to the TFL constructor.
For that matter, DataLayout is too limited to represent all the nuances of a backend.
That would be okay, but I think it would also be good to fix the DL / TFL mismatches that were identified here. And maybe add an assertion that they are in sync if
I've updated #105478 and will leave this PR open so that I don't forget to address these issues when I have more time.
I think this is a bug. I don't believe anything in TargetMachine should depend on context IR. You can recycle the TargetMachine for different compilations. The TargetMachine owns the subtarget map, which is keyed off the target-cpu/target-features, and could be used from different modules with different flags.