
[AArch64] Stop reserved registers from being saved in prolog/epilog #138448


Merged: 7 commits, May 12, 2025
7 changes: 7 additions & 0 deletions llvm/lib/Target/AArch64/AArch64FrameLowering.cpp
@@ -3619,6 +3619,13 @@ void AArch64FrameLowering::determineCalleeSaves(MachineFunction &MF,
if (Reg == BasePointerReg)
SavedRegs.set(Reg);

// Don't save manually reserved registers set through +reserve-x#i,
// even for callee-saved registers, as per GCC's behavior.
if (RegInfo->isUserReservedReg(MF, Reg)) {
SavedRegs.reset(Reg);
continue;
}

bool RegUsed = SavedRegs.test(Reg);
unsigned PairedReg = AArch64::NoRegister;
const bool RegIsGPR64 = AArch64::GPR64RegClass.contains(Reg);
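The guard added above can be modeled in isolation: a register the user reserved must be dropped from the saved set even if liveness analysis marked it. A minimal standalone sketch, where plain integers stand in for MCRegister and `std::bitset` for LLVM's BitVector (the function and parameter names here are invented for illustration, not LLVM API):

```cpp
#include <bitset>
#include <vector>

// Minimal model of the determineCalleeSaves change: walk the callee-saved
// list, mark used registers for saving, then unconditionally drop any
// register the user reserved. All names are illustrative, not LLVM's.
constexpr int kNumRegs = 32;

std::bitset<kNumRegs> pickSavedRegs(const std::vector<int> &CSRegs,
                                    const std::bitset<kNumRegs> &Used,
                                    const std::bitset<kNumRegs> &UserReserved) {
  std::bitset<kNumRegs> Saved;
  for (int Reg : CSRegs) {
    if (Used[Reg])
      Saved.set(Reg);
    // The patch's guard: a register reserved via +reserve-x#i must never
    // be spilled in the prologue, even if it is in the callee-saved list.
    if (UserReserved[Reg])
      Saved.reset(Reg);
  }
  return Saved;
}
```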
17 changes: 17 additions & 0 deletions llvm/lib/Target/AArch64/AArch64RegisterInfo.cpp
@@ -518,6 +518,18 @@ AArch64RegisterInfo::getStrictlyReservedRegs(const MachineFunction &MF) const {
return Reserved;
}

BitVector
AArch64RegisterInfo::getUserReservedRegs(const MachineFunction &MF) const {
BitVector Reserved(getNumRegs());
for (size_t i = 0; i < AArch64::GPR32commonRegClass.getNumRegs(); ++i) {
// ReserveXRegister is set for registers manually reserved
// through +reserve-x#i.
if (MF.getSubtarget<AArch64Subtarget>().isXRegisterReserved(i))
markSuperRegs(Reserved, AArch64::GPR32commonRegClass.getRegister(i));
}
return Reserved;
}

BitVector
AArch64RegisterInfo::getReservedRegs(const MachineFunction &MF) const {
BitVector Reserved(getNumRegs());
@@ -551,6 +563,11 @@ bool AArch64RegisterInfo::isReservedReg(const MachineFunction &MF,
return getReservedRegs(MF)[Reg];
}

bool AArch64RegisterInfo::isUserReservedReg(const MachineFunction &MF,
MCRegister Reg) const {
return getUserReservedRegs(MF)[Reg];
Collaborator commented:
This recomputes the entire BitVector every time we query for a callee-saved register? That seems expensive.

@yasuna-oribe (Contributor, Author) replied, May 8, 2025:
I was aware of this when writing the function, but note the existing comment at line 431, `// FIXME: avoid re-calculating this every time.`, for getStrictlyReservedRegs, a far more expensive function that also returns a BitVector and is called in the same loop. I followed the golden rule of matching existing code.

The loop in determineCalleeSaves only iterates over the callee-saved registers (CSRegs) anyway. If you insist, I can compute the BitVector once at the top of the function and look registers up in it, but that would be inconsistent with the rest of the function, which calls isReserved directly (which calls getReservedRegs, which in turn calls getStrictlyReservedRegs). Changing that pattern would either break the golden rule or require an overhaul outside the scope of this patch, and I doubt caching would make a measurable difference.

Collaborator replied:

I guess, given that the number of GPRs on AArch64 is fixed, it's not a big deal to loop 1000 times.

}

bool AArch64RegisterInfo::isStrictlyReservedReg(const MachineFunction &MF,
MCRegister Reg) const {
return getStrictlyReservedRegs(MF)[Reg];
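The caching the reviewer alludes to could look like the sketch below: compute the user-reserved set once and answer subsequent queries from the cache. This is not what the patch does, and the names (`RegInfoCache`, the hard-coded register 20) are invented for illustration, not LLVM API:

```cpp
#include <bitset>
#include <optional>

// Illustrative memoization of a reserved-register query: the full scan runs
// once, and every later lookup reads the cached std::bitset. Everything here
// is a stand-in; none of these names exist in LLVM.
constexpr int kNumRegs = 32;

struct RegInfoCache {
  mutable std::optional<std::bitset<kNumRegs>> Cached;
  mutable int ComputeCount = 0; // counts how often the full scan runs

  std::bitset<kNumRegs> computeUserReserved() const {
    ++ComputeCount;
    std::bitset<kNumRegs> R;
    R.set(20); // pretend x20 was reserved via +reserve-x20
    return R;
  }

  bool isUserReservedReg(int Reg) const {
    if (!Cached)
      Cached = computeUserReserved(); // full scan happens only once
    return (*Cached)[Reg];
  }
};
```

As the thread notes, with a fixed register count the uncached version is already cheap; the trade-off is recomputation cost versus keeping the query pattern consistent with isReserved.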
2 changes: 2 additions & 0 deletions llvm/lib/Target/AArch64/AArch64RegisterInfo.h
@@ -35,6 +35,7 @@ class AArch64RegisterInfo final : public AArch64GenRegisterInfo {
}

bool isReservedReg(const MachineFunction &MF, MCRegister Reg) const;
bool isUserReservedReg(const MachineFunction &MF, MCRegister Reg) const;
bool isStrictlyReservedReg(const MachineFunction &MF, MCRegister Reg) const;
bool isAnyArgRegReserved(const MachineFunction &MF) const;
void emitReservedArgRegCallError(const MachineFunction &MF) const;
@@ -93,6 +94,7 @@ class AArch64RegisterInfo final : public AArch64GenRegisterInfo {
const uint32_t *getWindowsStackProbePreservedMask() const;

BitVector getStrictlyReservedRegs(const MachineFunction &MF) const;
BitVector getUserReservedRegs(const MachineFunction &MF) const;
BitVector getReservedRegs(const MachineFunction &MF) const override;
std::optional<std::string>
explainReservedReg(const MachineFunction &MF,
43 changes: 43 additions & 0 deletions llvm/test/CodeGen/AArch64/reserveXreg-for-regalloc.ll
@@ -0,0 +1,43 @@
; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s -mtriple=aarch64-unknown-linux-gnu -reserve-regs-for-regalloc=LR,FP,X28,X27,X26,X25,X24,X23,X22,X21,X20,X19,X18,X17,X16,X15,X14,X13,X12,X11,X10,X9,X8,X7,X6,X5,X4 | FileCheck %s
; RUN: llc < %s -mtriple=aarch64-unknown-linux-gnu -reserve-regs-for-regalloc=X30,X29,X28,X27,X26,X25,X24,X23,X22,X21,X20,X19,X18,X17,X16,X15,X14,X13,X12,X11,X10,X9,X8,X7,X6,X5,X4 | FileCheck %s

; LR, FP and their aliases X30, X29 should be correctly recognized and not used.

define void @foo(i64 %v1, i64 %v2, ptr %ptr) {
; CHECK-LABEL: foo:
; CHECK: // %bb.0:
; CHECK-NEXT: sub sp, sp, #16
; CHECK-NEXT: .cfi_def_cfa_offset 16
; CHECK-NEXT: add x3, x0, x1
; CHECK-NEXT: str x3, [sp, #8] // 8-byte Folded Spill
; CHECK-NEXT: str x3, [x2, #8]
; CHECK-NEXT: ldr x3, [x2, #16]
; CHECK-NEXT: add x3, x0, x3
; CHECK-NEXT: sub x3, x3, x1
; CHECK-NEXT: str x3, [x2, #16]
; CHECK-NEXT: ldr x3, [sp, #8] // 8-byte Folded Reload
; CHECK-NEXT: str x3, [x2, #24]
; CHECK-NEXT: str x0, [x2, #32]
; CHECK-NEXT: str x1, [x2, #40]
; CHECK-NEXT: add sp, sp, #16
; CHECK-NEXT: ret
%v3 = add i64 %v1, %v2
%p1 = getelementptr i64, ptr %ptr, i64 1
store volatile i64 %v3, ptr %p1, align 8

%p2 = getelementptr i64, ptr %ptr, i64 2
%v4 = load volatile i64, ptr %p2, align 8
%v5 = add i64 %v1, %v4
%v6 = sub i64 %v5, %v2
store volatile i64 %v6, ptr %p2, align 8

%p3 = getelementptr i64, ptr %ptr, i64 3
store volatile i64 %v3, ptr %p3, align 8

%p4 = getelementptr i64, ptr %ptr, i64 4
store volatile i64 %v1, ptr %p4, align 8
%p5 = getelementptr i64, ptr %ptr, i64 5
store volatile i64 %v2, ptr %p5, align 8
ret void
}