[RISCV] Separate the calling convention handlers into their own file. NFC #107484


Merged: 1 commit into llvm:main on Sep 6, 2024

Conversation

topperc (Collaborator) commented Sep 5, 2024

These are used by both SelectionDAG and GlobalISel and are separate from RISCVTargetLowering.

Having a separate file matches how other targets are structured, though other targets generate most of their calling convention code through tablegen.

I moved the CC_RISCV functions from the llvm::RISCV namespace to the top-level llvm namespace. That's what the tablegen-generated code on other targets does, and the functions already have RISCV in their name. RISCVCCAssignFn is likewise moved from RISCVTargetLowering to the llvm namespace.

llvmbot (Member) commented Sep 5, 2024

@llvm/pr-subscribers-backend-risc-v

Author: Craig Topper (topperc)

Changes

These are used by both SelectionDAG and GlobalISel and are separate from RISCVTargetLowering.

Having a separate file matches how other targets are structured, though other targets generate most of their calling convention code through tablegen.

I moved the CC_RISCV functions from the llvm::RISCV namespace to the top-level llvm namespace. That's what the tablegen-generated code on other targets does, and the functions already have RISCV in their name.


Patch is 73.06 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/107484.diff

6 Files Affected:

  • (modified) llvm/lib/Target/RISCV/CMakeLists.txt (+1)
  • (modified) llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp (+14-15)
  • (added) llvm/lib/Target/RISCV/RISCVCallingConv.cpp (+684)
  • (added) llvm/lib/Target/RISCV/RISCVCallingConv.h (+51)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.cpp (+11-680)
  • (modified) llvm/lib/Target/RISCV/RISCVISelLowering.h (+1-30)
diff --git a/llvm/lib/Target/RISCV/CMakeLists.txt b/llvm/lib/Target/RISCV/CMakeLists.txt
index cbb4c2cedfb97e..aef0e4fbbf5847 100644
--- a/llvm/lib/Target/RISCV/CMakeLists.txt
+++ b/llvm/lib/Target/RISCV/CMakeLists.txt
@@ -29,6 +29,7 @@ add_public_tablegen_target(RISCVCommonTableGen)
 
 add_llvm_target(RISCVCodeGen
   RISCVAsmPrinter.cpp
+  RISCVCallingConv.cpp
   RISCVCodeGenPrepare.cpp
   RISCVDeadRegisterDefinitions.cpp
   RISCVMakeCompressible.cpp
diff --git a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp
index 31a9df53a2aa1b..14832204058f88 100644
--- a/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp
+++ b/llvm/lib/Target/RISCV/GISel/RISCVCallLowering.cpp
@@ -13,6 +13,7 @@
 //===----------------------------------------------------------------------===//
 
 #include "RISCVCallLowering.h"
+#include "RISCVCallingConv.h"
 #include "RISCVISelLowering.h"
 #include "RISCVMachineFunctionInfo.h"
 #include "RISCVSubtarget.h"
@@ -30,14 +31,13 @@ struct RISCVOutgoingValueAssigner : public CallLowering::OutgoingValueAssigner {
   // The function used internally to assign args - we ignore the AssignFn stored
   // by OutgoingValueAssigner since RISC-V implements its CC using a custom
   // function with a different signature.
-  RISCVTargetLowering::RISCVCCAssignFn *RISCVAssignFn;
+  RISCVCCAssignFn *RISCVAssignFn;
 
   // Whether this is assigning args for a return.
   bool IsRet;
 
 public:
-  RISCVOutgoingValueAssigner(
-      RISCVTargetLowering::RISCVCCAssignFn *RISCVAssignFn_, bool IsRet)
+  RISCVOutgoingValueAssigner(RISCVCCAssignFn *RISCVAssignFn_, bool IsRet)
       : CallLowering::OutgoingValueAssigner(nullptr),
         RISCVAssignFn(RISCVAssignFn_), IsRet(IsRet) {}
 
@@ -182,14 +182,13 @@ struct RISCVIncomingValueAssigner : public CallLowering::IncomingValueAssigner {
   // The function used internally to assign args - we ignore the AssignFn stored
   // by IncomingValueAssigner since RISC-V implements its CC using a custom
   // function with a different signature.
-  RISCVTargetLowering::RISCVCCAssignFn *RISCVAssignFn;
+  RISCVCCAssignFn *RISCVAssignFn;
 
   // Whether this is assigning args from a return.
   bool IsRet;
 
 public:
-  RISCVIncomingValueAssigner(
-      RISCVTargetLowering::RISCVCCAssignFn *RISCVAssignFn_, bool IsRet)
+  RISCVIncomingValueAssigner(RISCVCCAssignFn *RISCVAssignFn_, bool IsRet)
       : CallLowering::IncomingValueAssigner(nullptr),
         RISCVAssignFn(RISCVAssignFn_), IsRet(IsRet) {}
 
@@ -425,7 +424,7 @@ bool RISCVCallLowering::lowerReturn(MachineIRBuilder &MIRBuilder,
     splitToValueTypes(OrigRetInfo, SplitRetInfos, DL, CC);
 
     RISCVOutgoingValueAssigner Assigner(
-        CC == CallingConv::Fast ? RISCV::CC_RISCV_FastCC : RISCV::CC_RISCV,
+        CC == CallingConv::Fast ? CC_RISCV_FastCC : CC_RISCV,
         /*IsRet=*/true);
     RISCVOutgoingValueHandler Handler(MIRBuilder, MF.getRegInfo(), Ret);
     if (!determineAndHandleAssignments(Handler, Assigner, SplitRetInfos,
@@ -461,9 +460,9 @@ bool RISCVCallLowering::canLowerReturn(MachineFunction &MF,
 
   for (unsigned I = 0, E = Outs.size(); I < E; ++I) {
     MVT VT = MVT::getVT(Outs[I].Ty);
-    if (RISCV::CC_RISCV(MF.getDataLayout(), ABI, I, VT, VT, CCValAssign::Full,
-                        Outs[I].Flags[0], CCInfo, /*IsFixed=*/true,
-                        /*isRet=*/true, nullptr, TLI))
+    if (CC_RISCV(MF.getDataLayout(), ABI, I, VT, VT, CCValAssign::Full,
+                 Outs[I].Flags[0], CCInfo, /*IsFixed=*/true,
+                 /*isRet=*/true, nullptr, TLI))
       return false;
   }
   return true;
@@ -576,9 +575,9 @@ bool RISCVCallLowering::lowerFormalArguments(MachineIRBuilder &MIRBuilder,
     ++Index;
   }
 
-  RISCVIncomingValueAssigner Assigner(
-      CC == CallingConv::Fast ? RISCV::CC_RISCV_FastCC : RISCV::CC_RISCV,
-      /*IsRet=*/false);
+  RISCVIncomingValueAssigner Assigner(CC == CallingConv::Fast ? CC_RISCV_FastCC
+                                                              : CC_RISCV,
+                                      /*IsRet=*/false);
   RISCVFormalArgHandler Handler(MIRBuilder, MF.getRegInfo());
 
   SmallVector<CCValAssign, 16> ArgLocs;
@@ -639,7 +638,7 @@ bool RISCVCallLowering::lowerCall(MachineIRBuilder &MIRBuilder,
   Call.addRegMask(TRI->getCallPreservedMask(MF, Info.CallConv));
 
   RISCVOutgoingValueAssigner ArgAssigner(
-      CC == CallingConv::Fast ? RISCV::CC_RISCV_FastCC : RISCV::CC_RISCV,
+      CC == CallingConv::Fast ? CC_RISCV_FastCC : CC_RISCV,
       /*IsRet=*/false);
   RISCVOutgoingValueHandler ArgHandler(MIRBuilder, MF.getRegInfo(), Call);
   if (!determineAndHandleAssignments(ArgHandler, ArgAssigner, SplitArgInfos,
@@ -667,7 +666,7 @@ bool RISCVCallLowering::lowerCall(MachineIRBuilder &MIRBuilder,
     splitToValueTypes(Info.OrigRet, SplitRetInfos, DL, CC);
 
     RISCVIncomingValueAssigner RetAssigner(
-        CC == CallingConv::Fast ? RISCV::CC_RISCV_FastCC : RISCV::CC_RISCV,
+        CC == CallingConv::Fast ? CC_RISCV_FastCC : CC_RISCV,
         /*IsRet=*/true);
     RISCVCallReturnHandler RetHandler(MIRBuilder, MF.getRegInfo(), Call);
     if (!determineAndHandleAssignments(RetHandler, RetAssigner, SplitRetInfos,
diff --git a/llvm/lib/Target/RISCV/RISCVCallingConv.cpp b/llvm/lib/Target/RISCV/RISCVCallingConv.cpp
new file mode 100644
index 00000000000000..bf6ae2d1c29102
--- /dev/null
+++ b/llvm/lib/Target/RISCV/RISCVCallingConv.cpp
@@ -0,0 +1,684 @@
+//===-- RISCVCallingConv.cpp - RISC-V Custom CC Routines ------------------===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file contains the custom routines for the RISC-V Calling Convention.
+//
+//===----------------------------------------------------------------------===//
+
+#include "RISCVCallingConv.h"
+#include "RISCVSubtarget.h"
+#include "llvm/IR/DataLayout.h"
+#include "llvm/MC/MCRegister.h"
+
+using namespace llvm;
+
+// Calling Convention Implementation.
+// The expectations for frontend ABI lowering vary from target to target.
+// Ideally, an LLVM frontend would be able to avoid worrying about many ABI
+// details, but this is a longer term goal. For now, we simply try to keep the
+// role of the frontend as simple and well-defined as possible. The rules can
+// be summarised as:
+// * Never split up large scalar arguments. We handle them here.
+// * If a hardfloat calling convention is being used, and the struct may be
+// passed in a pair of registers (fp+fp, int+fp), and both registers are
+// available, then pass as two separate arguments. If either the GPRs or FPRs
+// are exhausted, then pass according to the rule below.
+// * If a struct could never be passed in registers or directly in a stack
+// slot (as it is larger than 2*XLEN and the floating point rules don't
+// apply), then pass it using a pointer with the byval attribute.
+// * If a struct is less than 2*XLEN, then coerce to either a two-element
+// word-sized array or a 2*XLEN scalar (depending on alignment).
+// * The frontend can determine whether a struct is returned by reference or
+// not based on its size and fields. If it will be returned by reference, the
+// frontend must modify the prototype so a pointer with the sret annotation is
+// passed as the first argument. This is not necessary for large scalar
+// returns.
+// * Struct return values and varargs should be coerced to structs containing
+// register-size fields in the same situations they would be for fixed
+// arguments.
+
+static const MCPhysReg ArgFPR16s[] = {RISCV::F10_H, RISCV::F11_H, RISCV::F12_H,
+                                      RISCV::F13_H, RISCV::F14_H, RISCV::F15_H,
+                                      RISCV::F16_H, RISCV::F17_H};
+static const MCPhysReg ArgFPR32s[] = {RISCV::F10_F, RISCV::F11_F, RISCV::F12_F,
+                                      RISCV::F13_F, RISCV::F14_F, RISCV::F15_F,
+                                      RISCV::F16_F, RISCV::F17_F};
+static const MCPhysReg ArgFPR64s[] = {RISCV::F10_D, RISCV::F11_D, RISCV::F12_D,
+                                      RISCV::F13_D, RISCV::F14_D, RISCV::F15_D,
+                                      RISCV::F16_D, RISCV::F17_D};
+// This is an interim calling convention and it may be changed in the future.
+static const MCPhysReg ArgVRs[] = {
+    RISCV::V8,  RISCV::V9,  RISCV::V10, RISCV::V11, RISCV::V12, RISCV::V13,
+    RISCV::V14, RISCV::V15, RISCV::V16, RISCV::V17, RISCV::V18, RISCV::V19,
+    RISCV::V20, RISCV::V21, RISCV::V22, RISCV::V23};
+static const MCPhysReg ArgVRM2s[] = {RISCV::V8M2,  RISCV::V10M2, RISCV::V12M2,
+                                     RISCV::V14M2, RISCV::V16M2, RISCV::V18M2,
+                                     RISCV::V20M2, RISCV::V22M2};
+static const MCPhysReg ArgVRM4s[] = {RISCV::V8M4, RISCV::V12M4, RISCV::V16M4,
+                                     RISCV::V20M4};
+static const MCPhysReg ArgVRM8s[] = {RISCV::V8M8, RISCV::V16M8};
+static const MCPhysReg ArgVRN2M1s[] = {
+    RISCV::V8_V9,   RISCV::V9_V10,  RISCV::V10_V11, RISCV::V11_V12,
+    RISCV::V12_V13, RISCV::V13_V14, RISCV::V14_V15, RISCV::V15_V16,
+    RISCV::V16_V17, RISCV::V17_V18, RISCV::V18_V19, RISCV::V19_V20,
+    RISCV::V20_V21, RISCV::V21_V22, RISCV::V22_V23};
+static const MCPhysReg ArgVRN3M1s[] = {
+    RISCV::V8_V9_V10,   RISCV::V9_V10_V11,  RISCV::V10_V11_V12,
+    RISCV::V11_V12_V13, RISCV::V12_V13_V14, RISCV::V13_V14_V15,
+    RISCV::V14_V15_V16, RISCV::V15_V16_V17, RISCV::V16_V17_V18,
+    RISCV::V17_V18_V19, RISCV::V18_V19_V20, RISCV::V19_V20_V21,
+    RISCV::V20_V21_V22, RISCV::V21_V22_V23};
+static const MCPhysReg ArgVRN4M1s[] = {
+    RISCV::V8_V9_V10_V11,   RISCV::V9_V10_V11_V12,  RISCV::V10_V11_V12_V13,
+    RISCV::V11_V12_V13_V14, RISCV::V12_V13_V14_V15, RISCV::V13_V14_V15_V16,
+    RISCV::V14_V15_V16_V17, RISCV::V15_V16_V17_V18, RISCV::V16_V17_V18_V19,
+    RISCV::V17_V18_V19_V20, RISCV::V18_V19_V20_V21, RISCV::V19_V20_V21_V22,
+    RISCV::V20_V21_V22_V23};
+static const MCPhysReg ArgVRN5M1s[] = {
+    RISCV::V8_V9_V10_V11_V12,   RISCV::V9_V10_V11_V12_V13,
+    RISCV::V10_V11_V12_V13_V14, RISCV::V11_V12_V13_V14_V15,
+    RISCV::V12_V13_V14_V15_V16, RISCV::V13_V14_V15_V16_V17,
+    RISCV::V14_V15_V16_V17_V18, RISCV::V15_V16_V17_V18_V19,
+    RISCV::V16_V17_V18_V19_V20, RISCV::V17_V18_V19_V20_V21,
+    RISCV::V18_V19_V20_V21_V22, RISCV::V19_V20_V21_V22_V23};
+static const MCPhysReg ArgVRN6M1s[] = {
+    RISCV::V8_V9_V10_V11_V12_V13,   RISCV::V9_V10_V11_V12_V13_V14,
+    RISCV::V10_V11_V12_V13_V14_V15, RISCV::V11_V12_V13_V14_V15_V16,
+    RISCV::V12_V13_V14_V15_V16_V17, RISCV::V13_V14_V15_V16_V17_V18,
+    RISCV::V14_V15_V16_V17_V18_V19, RISCV::V15_V16_V17_V18_V19_V20,
+    RISCV::V16_V17_V18_V19_V20_V21, RISCV::V17_V18_V19_V20_V21_V22,
+    RISCV::V18_V19_V20_V21_V22_V23};
+static const MCPhysReg ArgVRN7M1s[] = {
+    RISCV::V8_V9_V10_V11_V12_V13_V14,   RISCV::V9_V10_V11_V12_V13_V14_V15,
+    RISCV::V10_V11_V12_V13_V14_V15_V16, RISCV::V11_V12_V13_V14_V15_V16_V17,
+    RISCV::V12_V13_V14_V15_V16_V17_V18, RISCV::V13_V14_V15_V16_V17_V18_V19,
+    RISCV::V14_V15_V16_V17_V18_V19_V20, RISCV::V15_V16_V17_V18_V19_V20_V21,
+    RISCV::V16_V17_V18_V19_V20_V21_V22, RISCV::V17_V18_V19_V20_V21_V22_V23};
+static const MCPhysReg ArgVRN8M1s[] = {RISCV::V8_V9_V10_V11_V12_V13_V14_V15,
+                                       RISCV::V9_V10_V11_V12_V13_V14_V15_V16,
+                                       RISCV::V10_V11_V12_V13_V14_V15_V16_V17,
+                                       RISCV::V11_V12_V13_V14_V15_V16_V17_V18,
+                                       RISCV::V12_V13_V14_V15_V16_V17_V18_V19,
+                                       RISCV::V13_V14_V15_V16_V17_V18_V19_V20,
+                                       RISCV::V14_V15_V16_V17_V18_V19_V20_V21,
+                                       RISCV::V15_V16_V17_V18_V19_V20_V21_V22,
+                                       RISCV::V16_V17_V18_V19_V20_V21_V22_V23};
+static const MCPhysReg ArgVRN2M2s[] = {RISCV::V8M2_V10M2,  RISCV::V10M2_V12M2,
+                                       RISCV::V12M2_V14M2, RISCV::V14M2_V16M2,
+                                       RISCV::V16M2_V18M2, RISCV::V18M2_V20M2,
+                                       RISCV::V20M2_V22M2};
+static const MCPhysReg ArgVRN3M2s[] = {
+    RISCV::V8M2_V10M2_V12M2,  RISCV::V10M2_V12M2_V14M2,
+    RISCV::V12M2_V14M2_V16M2, RISCV::V14M2_V16M2_V18M2,
+    RISCV::V16M2_V18M2_V20M2, RISCV::V18M2_V20M2_V22M2};
+static const MCPhysReg ArgVRN4M2s[] = {
+    RISCV::V8M2_V10M2_V12M2_V14M2, RISCV::V10M2_V12M2_V14M2_V16M2,
+    RISCV::V12M2_V14M2_V16M2_V18M2, RISCV::V14M2_V16M2_V18M2_V20M2,
+    RISCV::V16M2_V18M2_V20M2_V22M2};
+static const MCPhysReg ArgVRN2M4s[] = {RISCV::V8M4_V12M4, RISCV::V12M4_V16M4,
+                                       RISCV::V16M4_V20M4};
+
+ArrayRef<MCPhysReg> RISCV::getArgGPRs(const RISCVABI::ABI ABI) {
+  // The GPRs used for passing arguments in the ILP32* and LP64* ABIs, except
+  // the ILP32E ABI.
+  static const MCPhysReg ArgIGPRs[] = {RISCV::X10, RISCV::X11, RISCV::X12,
+                                       RISCV::X13, RISCV::X14, RISCV::X15,
+                                       RISCV::X16, RISCV::X17};
+  // The GPRs used for passing arguments in the ILP32E/ILP64E ABI.
+  static const MCPhysReg ArgEGPRs[] = {RISCV::X10, RISCV::X11, RISCV::X12,
+                                       RISCV::X13, RISCV::X14, RISCV::X15};
+
+  if (ABI == RISCVABI::ABI_ILP32E || ABI == RISCVABI::ABI_LP64E)
+    return ArrayRef(ArgEGPRs);
+
+  return ArrayRef(ArgIGPRs);
+}
+
+static ArrayRef<MCPhysReg> getFastCCArgGPRs(const RISCVABI::ABI ABI) {
+  // The GPRs used for passing arguments in the FastCC, X5 and X6 might be used
+  // for save-restore libcall, so we don't use them.
+  // Don't use X7 for fastcc, since Zicfilp uses X7 as the label register.
+  static const MCPhysReg FastCCIGPRs[] = {
+      RISCV::X10, RISCV::X11, RISCV::X12, RISCV::X13, RISCV::X14, RISCV::X15,
+      RISCV::X16, RISCV::X17, RISCV::X28, RISCV::X29, RISCV::X30, RISCV::X31};
+
+  // The GPRs used for passing arguments in the FastCC when using ILP32E/ILP64E.
+  static const MCPhysReg FastCCEGPRs[] = {RISCV::X10, RISCV::X11, RISCV::X12,
+                                          RISCV::X13, RISCV::X14, RISCV::X15};
+
+  if (ABI == RISCVABI::ABI_ILP32E || ABI == RISCVABI::ABI_LP64E)
+    return ArrayRef(FastCCEGPRs);
+
+  return ArrayRef(FastCCIGPRs);
+}
+
+// Pass a 2*XLEN argument that has been split into two XLEN values through
+// registers or the stack as necessary.
+static bool CC_RISCVAssign2XLen(unsigned XLen, CCState &State, CCValAssign VA1,
+                                ISD::ArgFlagsTy ArgFlags1, unsigned ValNo2,
+                                MVT ValVT2, MVT LocVT2,
+                                ISD::ArgFlagsTy ArgFlags2, bool EABI) {
+  unsigned XLenInBytes = XLen / 8;
+  const RISCVSubtarget &STI =
+      State.getMachineFunction().getSubtarget<RISCVSubtarget>();
+  ArrayRef<MCPhysReg> ArgGPRs = RISCV::getArgGPRs(STI.getTargetABI());
+
+  if (MCRegister Reg = State.AllocateReg(ArgGPRs)) {
+    // At least one half can be passed via register.
+    State.addLoc(CCValAssign::getReg(VA1.getValNo(), VA1.getValVT(), Reg,
+                                     VA1.getLocVT(), CCValAssign::Full));
+  } else {
+    // Both halves must be passed on the stack, with proper alignment.
+    // TODO: To be compatible with GCC's behaviors, we force them to have 4-byte
+    // alignment. This behavior may be changed when RV32E/ILP32E is ratified.
+    Align StackAlign(XLenInBytes);
+    if (!EABI || XLen != 32)
+      StackAlign = std::max(StackAlign, ArgFlags1.getNonZeroOrigAlign());
+    State.addLoc(
+        CCValAssign::getMem(VA1.getValNo(), VA1.getValVT(),
+                            State.AllocateStack(XLenInBytes, StackAlign),
+                            VA1.getLocVT(), CCValAssign::Full));
+    State.addLoc(CCValAssign::getMem(
+        ValNo2, ValVT2, State.AllocateStack(XLenInBytes, Align(XLenInBytes)),
+        LocVT2, CCValAssign::Full));
+    return false;
+  }
+
+  if (MCRegister Reg = State.AllocateReg(ArgGPRs)) {
+    // The second half can also be passed via register.
+    State.addLoc(
+        CCValAssign::getReg(ValNo2, ValVT2, Reg, LocVT2, CCValAssign::Full));
+  } else {
+    // The second half is passed via the stack, without additional alignment.
+    State.addLoc(CCValAssign::getMem(
+        ValNo2, ValVT2, State.AllocateStack(XLenInBytes, Align(XLenInBytes)),
+        LocVT2, CCValAssign::Full));
+  }
+
+  return false;
+}
+
+static MCRegister allocateRVVReg(MVT ValVT, unsigned ValNo, CCState &State,
+                                 const RISCVTargetLowering &TLI) {
+  const TargetRegisterClass *RC = TLI.getRegClassFor(ValVT);
+  if (RC == &RISCV::VRRegClass) {
+    // Assign the first mask argument to V0.
+    // This is an interim calling convention and it may be changed in the
+    // future.
+    if (ValVT.getVectorElementType() == MVT::i1)
+      if (MCRegister Reg = State.AllocateReg(RISCV::V0))
+        return Reg;
+    return State.AllocateReg(ArgVRs);
+  }
+  if (RC == &RISCV::VRM2RegClass)
+    return State.AllocateReg(ArgVRM2s);
+  if (RC == &RISCV::VRM4RegClass)
+    return State.AllocateReg(ArgVRM4s);
+  if (RC == &RISCV::VRM8RegClass)
+    return State.AllocateReg(ArgVRM8s);
+  if (RC == &RISCV::VRN2M1RegClass)
+    return State.AllocateReg(ArgVRN2M1s);
+  if (RC == &RISCV::VRN3M1RegClass)
+    return State.AllocateReg(ArgVRN3M1s);
+  if (RC == &RISCV::VRN4M1RegClass)
+    return State.AllocateReg(ArgVRN4M1s);
+  if (RC == &RISCV::VRN5M1RegClass)
+    return State.AllocateReg(ArgVRN5M1s);
+  if (RC == &RISCV::VRN6M1RegClass)
+    return State.AllocateReg(ArgVRN6M1s);
+  if (RC == &RISCV::VRN7M1RegClass)
+    return State.AllocateReg(ArgVRN7M1s);
+  if (RC == &RISCV::VRN8M1RegClass)
+    return State.AllocateReg(ArgVRN8M1s);
+  if (RC == &RISCV::VRN2M2RegClass)
+    return State.AllocateReg(ArgVRN2M2s);
+  if (RC == &RISCV::VRN3M2RegClass)
+    return State.AllocateReg(ArgVRN3M2s);
+  if (RC == &RISCV::VRN4M2RegClass)
+    return State.AllocateReg(ArgVRN4M2s);
+  if (RC == &RISCV::VRN2M4RegClass)
+    return State.AllocateReg(ArgVRN2M4s);
+  llvm_unreachable("Unhandled register class for ValueType");
+}
+
+// Implements the RISC-V calling convention. Returns true upon failure.
+bool llvm::CC_RISCV(const DataLayout &DL, RISCVABI::ABI ABI, unsigned ValNo,
+                    MVT ValVT, MVT LocVT, CCValAssign::LocInfo LocInfo,
+                    ISD::ArgFlagsTy ArgFlags, CCState &State, bool IsFixed,
+                    bool IsRet, Type *OrigTy, const RISCVTargetLowering &TLI) {
+  unsigned XLen = DL.getLargestLegalIntTypeSizeInBits();
+  assert(XLen == 32 || XLen == 64);
+  MVT XLenVT = XLen == 32 ? MVT::i32 : MVT::i64;
+
+  // Static chain parameter must not be passed in normal argument registers,
+  // so we assign t2 for it as done in GCC's __builtin_call_with_static_chain
+  if (ArgFlags.isNest()) {
+    if (MCRegister Reg = State.AllocateReg(RISCV::X7)) {
+      State.addLoc(CCValAssign::getReg(ValNo, ValVT, Reg, LocVT, LocInfo));
+      return false;
+    }
+  }
+
+  // Any return value split in to more than two values can't be returned
+  // directly. Vectors are returned via the available vector registers.
+  if (!LocVT.isVector() && IsRet && ValNo > 1)
+    return true;
+
+  // UseGPRForF16_F32 if targeting one of the soft-float ABIs, if passing a
+  // variadic argument, or if no F16/F32 argument registers are available.
+  bool UseGPRForF16_F32 = true;
+  // UseGPRForF64 if targeting soft-float ABIs or an FLEN=32 ABI, if passing a
+  // variadic argument, or if no F64 argument registers are available.
+  bool UseGPRForF64 = true;
+
+  switch (ABI) {
+  default:
+    llvm_unreachable("Unexpected ABI");
+  case RISCVABI::ABI_ILP32:
+  case RISCVABI::ABI_ILP32E:
+  case RISCVABI::ABI_LP64:
+  case RISCVABI::ABI_LP64E:
+    break;
+  case RISCVABI::ABI_ILP32F:
+  case RISCVABI::ABI_LP64F:
+    UseGPRForF16_F32 = !IsFixed;
+    break;
+  case RISCVABI::ABI_ILP32D:
+  case RISCVABI::ABI_LP64D:
+    UseGPRForF16_F32 = !IsFixed;
+    UseGPRForF64 = !IsFixed;
+    break;
+  }
+
+  // FPR16, FPR32, a...
[truncated]

@topperc topperc requested a review from 4vtomat September 5, 2024 23:25
4vtomat (Member) commented Sep 6, 2024

Makes sense to me. Is it an NFC?

@topperc topperc changed the title [RISCV] Separate the calling convention handlers into their own file. [RISCV] Separate the calling convention handlers into their own file. NFC Sep 6, 2024
topperc (Collaborator, Author) commented Sep 6, 2024

Makes sense to me. Is it an NFC?

Yes. Forgot to put it in the title.

wangpc-pp (Contributor) left a comment:


LGTM.

@topperc topperc merged commit 093b8bf into llvm:main Sep 6, 2024
10 checks passed
@topperc topperc deleted the pr/calling-conv-separate branch September 6, 2024 05:29
llvm-ci (Collaborator) commented Sep 6, 2024

LLVM Buildbot has detected a new failure on builder lld-x86_64-win running on as-worker-93 while building llvm at step 7 "test-build-unified-tree-check-all".

Full details are available at: https://lab.llvm.org/buildbot/#/builders/146/builds/1081

Here is the relevant piece of the build log for the reference
Step 7 (test-build-unified-tree-check-all) failure: test (failure)
******************** TEST 'LLVM-Unit :: Support/./SupportTests.exe/43/86' FAILED ********************
Script(shard):
--
GTEST_OUTPUT=json:C:\a\lld-x86_64-win\build\unittests\Support\.\SupportTests.exe-LLVM-Unit-21412-43-86.json GTEST_SHUFFLE=0 GTEST_TOTAL_SHARDS=86 GTEST_SHARD_INDEX=43 C:\a\lld-x86_64-win\build\unittests\Support\.\SupportTests.exe
--

Script:
--
C:\a\lld-x86_64-win\build\unittests\Support\.\SupportTests.exe --gtest_filter=ProgramEnvTest.CreateProcessLongPath
--
C:\a\lld-x86_64-win\llvm-project\llvm\unittests\Support\ProgramTest.cpp(160): error: Expected equality of these values:
  0
  RC
    Which is: -2

C:\a\lld-x86_64-win\llvm-project\llvm\unittests\Support\ProgramTest.cpp(163): error: fs::remove(Twine(LongPath)): did not return errc::success.
error number: 13
error message: permission denied



C:\a\lld-x86_64-win\llvm-project\llvm\unittests\Support\ProgramTest.cpp:160
Expected equality of these values:
  0
  RC
    Which is: -2

C:\a\lld-x86_64-win\llvm-project\llvm\unittests\Support\ProgramTest.cpp:163
fs::remove(Twine(LongPath)): did not return errc::success.
error number: 13
error message: permission denied




********************

