[Flang] LoongArch64 support for BIND(C) derived types in mabi=lp64d. #117108
Conversation
This patch supports both the passing and returning of BIND(C) type parameters. Reference ABI: https://github.com/loongson/la-abi-specs/blob/release/lapcs.adoc#subroutine-calling-sequence
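For context, the C structs below illustrate the kinds of BIND(C) derived types this patch affects and the size thresholds the lp64d rules key on. These structs are illustrative assumptions, not taken from the patch itself:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical C-side structs matching BIND(C) derived types. Under lp64d,
// classification depends on the flattened fields and on the total size:
struct IntField  { int32_t i; };           // single int  -> one GAR (as i64)
struct FpPair    { float x; double y; };   // fp + fp     -> two FARs
struct MixedPair { int64_t i; float f; };  // int + fp    -> one GAR + one FAR
struct Big       { double a[3]; };         // 24 bytes    -> passed by reference

static_assert(sizeof(IntField) <= 8,  "fits one 64-bit GAR");
static_assert(sizeof(Big)      >  16, "exceeds two GARs, so by reference");
```

The 8- and 16-byte thresholds correspond to `GRLenInChar` and `2 * GRLenInChar` in the patch below.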
@llvm/pr-subscribers-flang-driver @llvm/pr-subscribers-flang-codegen Author: Zhaoxin Yang (ylzsx). Changes: This patch supports both the passing and returning of BIND(C) type parameters. Patch is 40.24 KiB, truncated to 20.00 KiB below; full version: https://github.com/llvm/llvm-project/pull/117108.diff 4 Files Affected:
diff --git a/flang/lib/Optimizer/CodeGen/Target.cpp b/flang/lib/Optimizer/CodeGen/Target.cpp
index 9ec055b1aecabb..90ce51552c687f 100644
--- a/flang/lib/Optimizer/CodeGen/Target.cpp
+++ b/flang/lib/Optimizer/CodeGen/Target.cpp
@@ -1081,6 +1081,9 @@ struct TargetLoongArch64 : public GenericTarget<TargetLoongArch64> {
using GenericTarget::GenericTarget;
static constexpr int defaultWidth = 64;
+ static constexpr int GRLen = defaultWidth; /* eight bytes */
+ static constexpr int GRLenInChar = GRLen / 8;
+ static constexpr int FRLen = defaultWidth; /* eight bytes */
CodeGenSpecifics::Marshalling
complexArgumentType(mlir::Location loc, mlir::Type eleTy) const override {
@@ -1151,6 +1154,311 @@ struct TargetLoongArch64 : public GenericTarget<TargetLoongArch64> {
return GenericTarget::integerArgumentType(loc, argTy);
}
+
+ /// Flatten non-basic types, resulting in an array of types containing only
+ /// `IntegerType` and `FloatType`.
+ std::vector<mlir::Type> flattenTypeList(mlir::Location loc,
+ const mlir::Type type) const {
+ std::vector<mlir::Type> flatTypes;
+
+ llvm::TypeSwitch<mlir::Type>(type)
+ .template Case<mlir::IntegerType>([&](mlir::IntegerType intTy) {
+ if (intTy.getWidth() != 0)
+ flatTypes.push_back(intTy);
+ })
+ .template Case<mlir::FloatType>([&](mlir::FloatType floatTy) {
+ if (floatTy.getWidth() != 0)
+ flatTypes.push_back(floatTy);
+ })
+ .template Case<mlir::ComplexType>([&](mlir::ComplexType cmplx) {
+ const auto *sem = &floatToSemantics(kindMap, cmplx.getElementType());
+ if (sem == &llvm::APFloat::IEEEsingle() ||
+ sem == &llvm::APFloat::IEEEdouble() ||
+ sem == &llvm::APFloat::IEEEquad())
+ std::fill_n(std::back_inserter(flatTypes), 2,
+ cmplx.getElementType());
+ else
+ TODO(loc, "unsupported complex type (not IEEEsingle, IEEEdouble, "
+ "IEEEquad) as a structure component for BIND(C), "
+ "VALUE derived type argument and type return");
+ })
+ .template Case<fir::LogicalType>([&](fir::LogicalType logicalTy) {
+ const auto width = kindMap.getLogicalBitsize(logicalTy.getFKind());
+ if (width != 0)
+ flatTypes.push_back(
+ mlir::IntegerType::get(type.getContext(), width));
+ })
+ .template Case<fir::CharacterType>([&](fir::CharacterType charTy) {
+ flatTypes.push_back(mlir::IntegerType::get(type.getContext(), 8));
+ })
+ .template Case<fir::SequenceType>([&](fir::SequenceType seqTy) {
+ if (!seqTy.hasDynamicExtents()) {
+ std::size_t numOfEle = seqTy.getConstantArraySize();
+ auto eleTy = seqTy.getEleTy();
+ if (!mlir::isa<mlir::IntegerType, mlir::FloatType>(eleTy)) {
+ auto subTypeList = flattenTypeList(loc, eleTy);
+ if (subTypeList.size() != 0)
+ for (std::size_t i = 0; i < numOfEle; ++i)
+ llvm::copy(subTypeList, std::back_inserter(flatTypes));
+ } else {
+ std::fill_n(std::back_inserter(flatTypes), numOfEle, eleTy);
+ }
+ } else
+ TODO(loc, "unsupported dynamic extent sequence type as a structure "
+ "component for BIND(C), "
+ "VALUE derived type argument and type return");
+ })
+ .template Case<fir::RecordType>([&](fir::RecordType recTy) {
+ for (auto component : recTy.getTypeList()) {
+ mlir::Type eleTy = component.second;
+ auto subTypeList = flattenTypeList(loc, eleTy);
+ if (subTypeList.size() != 0)
+ llvm::copy(subTypeList, std::back_inserter(flatTypes));
+ }
+ })
+ .template Case<fir::VectorType>([&](fir::VectorType vecTy) {
+ std::size_t numOfEle = vecTy.getLen();
+ auto eleTy = vecTy.getEleTy();
+ if (!(mlir::isa<mlir::IntegerType, mlir::FloatType>(eleTy))) {
+ auto subTypeList = flattenTypeList(loc, eleTy);
+ if (subTypeList.size() != 0)
+ for (std::size_t i = 0; i < numOfEle; ++i)
+ llvm::copy(subTypeList, std::back_inserter(flatTypes));
+ } else {
+ std::fill_n(std::back_inserter(flatTypes), numOfEle, eleTy);
+ }
+ })
+ .Default([&](mlir::Type ty) {
+ if (fir::conformsWithPassByRef(ty))
+ flatTypes.push_back(
+ mlir::IntegerType::get(type.getContext(), GRLen));
+ else
+ TODO(loc, "unsupported component type for BIND(C), VALUE derived "
+ "type argument and type return");
+ });
+
+ return flatTypes;
+ }
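As a rough sketch of what `flattenTypeList` computes, using an illustrative toy type model rather than the real MLIR/FIR types: an aggregate reduces to a flat list of scalar entries, a complex expands to two copies of its element float, arrays repeat their flattened element, and records concatenate their fields.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-in for the MLIR type hierarchy (names are illustrative only).
struct Ty {
  enum Kind { Int, Float, Complex, Array, Record } kind;
  unsigned width = 0;       // bit width, for Int/Float/Complex element
  unsigned count = 1;       // element count, for Array
  std::vector<Ty> members;  // element type (Array) or fields (Record)
};

// Flatten to entries like "i32" / "f64", mirroring flattenTypeList.
void flatten(const Ty &t, std::vector<std::string> &out) {
  switch (t.kind) {
  case Ty::Int:   out.push_back("i" + std::to_string(t.width)); break;
  case Ty::Float: out.push_back("f" + std::to_string(t.width)); break;
  case Ty::Complex: // complex<f32>/complex<f64> flattens to two floats
    out.push_back("f" + std::to_string(t.width));
    out.push_back("f" + std::to_string(t.width));
    break;
  case Ty::Array:
    for (unsigned i = 0; i < t.count; ++i) flatten(t.members[0], out);
    break;
  case Ty::Record:
    for (const Ty &m : t.members) flatten(m, out);
    break;
  }
}
```

For example, a record `{i32, complex<f32>}` flattens to `[i32, f32, f32]`, which is already too many entries to be FAR-eligible.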
+
+ /// Determine if a struct is eligible to be passed in FARs (and GARs) (i.e.,
+ /// when flattened it contains a single fp value, fp+fp, or int+fp of
+ /// appropriate size).
+ bool detectFARsEligibleStruct(mlir::Location loc, fir::RecordType recTy,
+ mlir::Type &Field1Ty,
+ mlir::Type &Field2Ty) const {
+
+ Field1Ty = Field2Ty = nullptr;
+ auto flatTypes = flattenTypeList(loc, recTy);
+ size_t flatSize = flatTypes.size();
+
+ // Cannot be eligible if the number of flattened types is equal to 0 or
+ // greater than 2.
+ if (flatSize == 0 || flatSize > 2)
+ return false;
+
+ bool isFirstAvaliableFloat = false;
+
+ assert((mlir::isa<mlir::IntegerType, mlir::FloatType>(flatTypes[0])) &&
+ "Type must be int or float after flattening");
+ if (auto floatTy = mlir::dyn_cast<mlir::FloatType>(flatTypes[0])) {
+ auto Size = floatTy.getWidth();
+ // Can't be eligible if larger than the FP registers. Half precision isn't
+ // currently supported on LoongArch and the ABI hasn't been confirmed, so
+ // default to the integer ABI in that case.
+ if (Size > FRLen || Size < 32)
+ return false;
+ isFirstAvaliableFloat = true;
+ Field1Ty = floatTy;
+ } else if (auto intTy = mlir::dyn_cast<mlir::IntegerType>(flatTypes[0])) {
+ if (intTy.getWidth() > GRLen)
+ return false;
+ Field1Ty = intTy;
+ }
+
+ // flatTypes has two elements
+ if (flatSize == 2) {
+ assert((mlir::isa<mlir::IntegerType, mlir::FloatType>(flatTypes[1])) &&
+ "Type must be integer or float after flattening");
+ if (auto floatTy = mlir::dyn_cast<mlir::FloatType>(flatTypes[1])) {
+ auto Size = floatTy.getWidth();
+ if (Size > FRLen || Size < 32)
+ return false;
+ Field2Ty = floatTy;
+ return true;
+ } else if (auto intTy = mlir::dyn_cast<mlir::IntegerType>(flatTypes[1])) {
+ // Can't be eligible if an integer type was already found (int+int pairs
+ // are not eligible).
+ if (!isFirstAvaliableFloat)
+ return false;
+ if (intTy.getWidth() > GRLen)
+ return false;
+ Field2Ty = intTy;
+ return true;
+ }
+ }
+
+ // return isFirstAvaliableFloat if flatTypes only has one element
+ return isFirstAvaliableFloat;
+ }
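The eligibility rule above can be summarized as a small predicate over the flattened list. This is a minimal sketch under the same assumptions as the patch (GRLen = FRLen = 64; names here are illustrative): only `{fp}`, `{fp, fp}`, `{int, fp}`, or `{fp, int}` qualify, where each fp is f32/f64 and each int fits a general register.

```cpp
#include <cassert>
#include <vector>

constexpr unsigned GRLen = 64, FRLen = 64;

struct Flat { bool isFloat; unsigned width; };

bool farsEligible(const std::vector<Flat> &flat) {
  if (flat.empty() || flat.size() > 2)
    return false;
  // f16/bf16 (width < 32) fall back to the integer ABI, as in the patch.
  auto fpOk  = [](const Flat &f) { return f.isFloat && f.width >= 32 && f.width <= FRLen; };
  auto intOk = [](const Flat &f) { return !f.isFloat && f.width <= GRLen; };
  if (flat.size() == 1)
    return fpOk(flat[0]);                             // a lone int is not eligible
  if (fpOk(flat[0]) && fpOk(flat[1]))  return true;   // fp + fp
  if (fpOk(flat[0]) && intOk(flat[1])) return true;   // fp + int
  if (intOk(flat[0]) && fpOk(flat[1])) return true;   // int + fp
  return false;                                       // int + int, oversized, etc.
}
```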
+
+ bool checkTypehasEnoughReg(mlir::Location loc, int &GARsLeft, int &FARsLeft,
+ const mlir::Type type) const {
+ if (type == nullptr)
+ return true;
+
+ llvm::TypeSwitch<mlir::Type>(type)
+ .template Case<mlir::IntegerType>([&](mlir::IntegerType intTy) {
+ const auto width = intTy.getWidth();
+ assert(width <= 128 &&
+ "integer type with width more than 128 bits is unexpected");
+ if (width == 0)
+ return;
+ if (width <= GRLen)
+ --GARsLeft;
+ else if (width <= 2 * GRLen)
+ GARsLeft = GARsLeft - 2;
+ })
+ .template Case<mlir::FloatType>([&](mlir::FloatType floatTy) {
+ const auto width = floatTy.getWidth();
+ assert(width <= 128 &&
+ "float type with width more than 128 bits is unexpected");
+ if (width == 0)
+ return;
+ if (width == 32 || width == 64)
+ --FARsLeft;
+ else if (width <= GRLen)
+ --GARsLeft;
+ else if (width <= 2 * GRLen)
+ GARsLeft = GARsLeft - 2;
+ })
+ .Default([&](mlir::Type ty) {
+ if (fir::conformsWithPassByRef(ty))
+ --GARsLeft; // Pointers.
+ else
+ TODO(loc, "unsupported component type for BIND(C), VALUE derived "
+ "type argument and type return");
+ });
+
+ return GARsLeft >= 0 && FARsLeft >= 0;
+ }
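The register-accounting step can be sketched as follows, a simplified model of `checkTypehasEnoughReg` (pointer handling omitted; names illustrative): each flattened scalar consumes registers, f32/f64 take one FAR, values wider than GRLen take two GARs, and a negative remainder means the argument would spill.

```cpp
#include <cassert>

constexpr unsigned GRLen = 64;

// Deduct one value's register cost; false means it no longer fits.
bool consume(bool isFloat, unsigned width, int &gars, int &fars) {
  if (isFloat && (width == 32 || width == 64))
    --fars;            // f32/f64 go to a floating-point register
  else if (width <= GRLen)
    --gars;            // one general-purpose register
  else if (width <= 2 * GRLen)
    gars -= 2;         // e.g. i128: a pair of general-purpose registers
  return gars >= 0 && fars >= 0;
}
```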
+
+ bool hasEnoughRegisters(mlir::Location loc, int GARsLeft, int FARsLeft,
+ const Marshalling &previousArguments,
+ const mlir::Type &Field1Ty,
+ const mlir::Type &Field2Ty) const {
+
+ for (auto typeAndAttr : previousArguments) {
+ const auto &attr = std::get<Attributes>(typeAndAttr);
+ if (attr.isByVal()) {
+ // Previous argument passed on the stack, and its address is passed in
+ // GAR.
+ --GARsLeft;
+ continue;
+ }
+
+ // Previous aggregate arguments were marshalled into simpler arguments.
+ const auto &type = std::get<mlir::Type>(typeAndAttr);
+ auto flatTypes = flattenTypeList(loc, type);
+
+ for (auto &flatTy : flatTypes) {
+ if (!checkTypehasEnoughReg(loc, GARsLeft, FARsLeft, flatTy))
+ return false;
+ }
+ }
+
+ if (!checkTypehasEnoughReg(loc, GARsLeft, FARsLeft, Field1Ty))
+ return false;
+ if (!checkTypehasEnoughReg(loc, GARsLeft, FARsLeft, Field2Ty))
+ return false;
+ return true;
+ }
+
+ /// LoongArch64 subroutine calling sequence ABI in:
+ /// https://github.com/loongson/la-abi-specs/blob/release/lapcs.adoc#subroutine-calling-sequence
+ CodeGenSpecifics::Marshalling
+ classifyStruct(mlir::Location loc, fir::RecordType recTy, int GARsLeft,
+ int FARsLeft, bool isResult,
+ const Marshalling &previousArguments) const {
+ CodeGenSpecifics::Marshalling marshal;
+
+ auto [recSize, recAlign] = fir::getTypeSizeAndAlignmentOrCrash(
+ loc, recTy, getDataLayout(), kindMap);
+ auto context = recTy.getContext();
+
+ if (recSize == 0) {
+ TODO(loc, "unsupported empty struct type for BIND(C), "
+ "VALUE derived type argument and type return");
+ }
+
+ if (recSize > 2 * GRLenInChar) {
+ marshal.emplace_back(
+ fir::ReferenceType::get(recTy),
+ AT{recAlign, /*byval=*/!isResult, /*sret=*/isResult});
+ return marshal;
+ }
+
+ // Pass in FARs (and GARs).
+ mlir::Type Field1Ty = nullptr, Field2Ty = nullptr;
+ if (detectFARsEligibleStruct(loc, recTy, Field1Ty, Field2Ty)) {
+ if (hasEnoughRegisters(loc, GARsLeft, FARsLeft, previousArguments,
+ Field1Ty, Field2Ty)) {
+ if (!isResult) {
+ if (Field1Ty)
+ marshal.emplace_back(Field1Ty, AT{});
+ if (Field2Ty)
+ marshal.emplace_back(Field2Ty, AT{});
+ } else {
+ // Field1Ty is always preferred over Field2Ty for assignment, so there
+ // will never be a case where Field1Ty == nullptr and Field2Ty !=
+ // nullptr.
+ if (Field1Ty && !Field2Ty)
+ marshal.emplace_back(Field1Ty, AT{});
+ else if (Field1Ty && Field2Ty)
+ marshal.emplace_back(
+ mlir::TupleType::get(context,
+ mlir::TypeRange{Field1Ty, Field2Ty}),
+ AT{/*alignment=*/0, /*byval=*/true});
+ }
+ return marshal;
+ }
+ }
+
+ if (recSize <= GRLenInChar) {
+ marshal.emplace_back(mlir::IntegerType::get(context, GRLen), AT{});
+ return marshal;
+ }
+
+ if (recAlign == 2 * GRLenInChar) {
+ marshal.emplace_back(mlir::IntegerType::get(context, 2 * GRLen), AT{});
+ return marshal;
+ }
+
+ // recSize > GRLenInChar && recSize <= 2 * GRLenInChar
+ marshal.emplace_back(
+ fir::SequenceType::get({2}, mlir::IntegerType::get(context, GRLen)),
+ AT{});
+ return marshal;
+ }
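The integer fallback at the end of `classifyStruct` reduces to a small decision table, sketched here for structs that are not FAR-eligible or ran out of registers (GRLenInChar = 8; the returned strings are illustrative labels, not real types): at most 8 bytes becomes one i64, a 16-byte-aligned struct up to 16 bytes becomes one i128, anything else up to 16 bytes becomes `[2 x i64]`, and larger structs go by reference.

```cpp
#include <cassert>
#include <string>

constexpr unsigned GRLenInChar = 8;

std::string classifyByInt(unsigned size, unsigned align) {
  if (size > 2 * GRLenInChar)   return "byref";      // too big for registers
  if (size <= GRLenInChar)      return "i64";        // one GAR
  if (align == 2 * GRLenInChar) return "i128";       // 16-byte aligned pair
  return "[2 x i64]";                                // two GARs
}
```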
+
+ /// Marshal a derived type passed by value like a C struct.
+ CodeGenSpecifics::Marshalling
+ structArgumentType(mlir::Location loc, fir::RecordType recTy,
+ const Marshalling &previousArguments) const override {
+ int GARsLeft = 8;
+ int FARsLeft = FRLen ? 8 : 0;
+
+ return classifyStruct(loc, recTy, GARsLeft, FARsLeft, /*isResult=*/false,
+ previousArguments);
+ }
+
+ CodeGenSpecifics::Marshalling
+ structReturnType(mlir::Location loc, fir::RecordType recTy) const override {
+ // The rules for return and argument types are the same.
+ int GARsLeft = 2;
+ int FARsLeft = FRLen ? 2 : 0;
+ return classifyStruct(loc, recTy, GARsLeft, FARsLeft, /*isResult=*/true,
+ {});
+ }
};
} // namespace
diff --git a/flang/test/Fir/struct-passing-loongarch64-byreg.fir b/flang/test/Fir/struct-passing-loongarch64-byreg.fir
new file mode 100644
index 00000000000000..576ea6459e17a0
--- /dev/null
+++ b/flang/test/Fir/struct-passing-loongarch64-byreg.fir
@@ -0,0 +1,232 @@
+/// Test LoongArch64 ABI rewrite of struct passed by value (BIND(C), VALUE derived types).
+/// This test covers cases where the struct can be passed in registers.
+/// Test cases can be roughly divided into two categories:
+/// - struct with a single intrinsic component;
+/// - struct with more than one field;
+/// Since the argument marshalling logic is largely the same within each category,
+/// only the first example in each category checks the entire invocation process,
+/// while the other examples only check the signatures.
+
+// REQUIRES: loongarch-registered-target
+// RUN: fir-opt --split-input-file --target-rewrite="target=loongarch64-unknown-linux-gnu" %s | FileCheck %s
+
+
+/// *********************** Struct with a single intrinsic component *********************** ///
+
+!ty_i16 = !fir.type<ti16{i:i16}>
+!ty_i32 = !fir.type<ti32{i:i32}>
+!ty_i64 = !fir.type<ti64{i:i64}>
+!ty_i128 = !fir.type<ti128{i:i128}>
+!ty_f16 = !fir.type<tf16{i:f16}>
+!ty_f32 = !fir.type<tf32{i:f32}>
+!ty_f64 = !fir.type<tf64{i:f64}>
+!ty_f128 = !fir.type<tf128{i:f128}>
+!ty_bf16 = !fir.type<tbf16{i:bf16}>
+!ty_char1 = !fir.type<tchar1{i:!fir.char<1>}>
+!ty_char2 = !fir.type<tchar2{i:!fir.char<2>}>
+!ty_log1 = !fir.type<tlog1{i:!fir.logical<1>}>
+!ty_log2 = !fir.type<tlog2{i:!fir.logical<2>}>
+!ty_log4 = !fir.type<tlog4{i:!fir.logical<4>}>
+!ty_log8 = !fir.type<tlog8{i:!fir.logical<8>}>
+!ty_log16 = !fir.type<tlog16{i:!fir.logical<16>}>
+!ty_cmplx_f32 = !fir.type<tcmplx_f32{i:complex<f32>}>
+!ty_cmplx_f64 = !fir.type<tcmplx_f64{i:complex<f64>}>
+
+module attributes {fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", llvm.data_layout = "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128", llvm.target_triple = "loongarch64-unknown-linux-gnu"} {
+
+// CHECK-LABEL: func.func private @test_func_i16(i64)
+func.func private @test_func_i16(%arg0: !ty_i16)
+// CHECK-LABEL: func.func @test_call_i16(
+// CHECK-SAME: %[[ARG0:.*]]: !fir.ref<!fir.type<ti16{i:i16}>>) {
+func.func @test_call_i16(%arg0: !fir.ref<!ty_i16>) {
+ // CHECK: %[[IN:.*]] = fir.load %[[ARG0]] : !fir.ref<!fir.type<ti16{i:i16}>>
+ // CHECK: %[[STACK:.*]] = llvm.intr.stacksave : !llvm.ptr
+ // CHECK: %[[ARR:.*]] = fir.alloca i64
+ // CHECK: %[[CVT:.*]] = fir.convert %[[ARR]] : (!fir.ref<i64>) -> !fir.ref<!fir.type<ti16{i:i16}>>
+ // CHECK: fir.store %[[IN]] to %[[CVT]] : !fir.ref<!fir.type<ti16{i:i16}>>
+ // CHECK: %[[LD:.*]] = fir.load %[[ARR]] : !fir.ref<i64>
+ %in = fir.load %arg0 : !fir.ref<!ty_i16>
+ // CHECK: fir.call @test_func_i16(%[[LD]]) : (i64) -> ()
+ // CHECK: llvm.intr.stackrestore %[[STACK]] : !llvm.ptr
+ fir.call @test_func_i16(%in) : (!ty_i16) -> ()
+ // CHECK: return
+ return
+}
+
+// CHECK-LABEL: func.func private @test_func_i32(i64)
+func.func private @test_func_i32(%arg0: !ty_i32)
+
+// CHECK-LABEL: func.func private @test_func_i64(i64)
+func.func private @test_func_i64(%arg0: !ty_i64)
+
+// CHECK-LABEL: func.func private @test_func_i128(i128)
+func.func private @test_func_i128(%arg0: !ty_i128)
+
+// CHECK-LABEL: func.func private @test_func_f16(i64)
+func.func private @test_func_f16(%arg0: !ty_f16)
+
+// CHECK-LABEL: func.func private @test_func_f32(f32)
+func.func private @test_func_f32(%arg0: !ty_f32)
+
+// CHECK-LABEL: func.func private @test_func_f64(f64)
+func.func private @test_func_f64(%arg0: !ty_f64)
+
+// CHECK-LABEL: func.func private @test_func_f128(i128)
+func.func private @test_func_f128(%arg0: !ty_f128)
+
+// CHECK-LABEL: func.func private @test_func_bf16(i64)
+func.func private @test_func_bf16(%arg0: !ty_bf16)
+
+// CHECK-LABEL: func.func private @test_func_char1(i64)
+func.func private @test_func_char1(%arg0: !ty_char1)
+
+// CHECK-LABEL: func.func private @test_func_char2(i64)
+func.func private @test_func_char2(%arg0: !ty_char2)
+
+// CHECK-LABEL: func.func private @test_func_log1(i64)
+func.func private @test_func_log1(%arg0: !ty_log1)
+
+// CHECK-LABEL: func.func private @test_func_log2(i64)
+func.func private @test_func_log2(%arg0: !ty_log2)
+
+// CHECK-LABEL: func.func private @test_func_log4(i64)
+func.func private @test_func_log4(%arg0: !ty_log4)
+
+// CHECK-LABEL: func.func private @test_func_log8(i64)
+func.func private @test_func_log8(%arg0: !ty_log8)
+
+// CHECK-LABEL: func.func private @test_func_log16(i128)
+func.func private @test_func_log16(%arg0: !ty_log16)
+
+// CHECK-LABEL: func.func private @test_func_cmplx_f32(f32, f32)
+func.func private @test_func_cmplx_f32(%arg0: !ty_cmplx_f32)
+
+// CHECK-LABEL: func.func private @test_func_cmplx_f64(f64, f64)
+func.func private @test_func_cmplx_f64(%arg0: !ty_cmplx_f64)
+}
+
+
+/// *************************** Struct with more than one field **************************** ///
+
+// -----
+
+!ty_i32_f32 = !fir.type<ti32_f32{i:i32,j:f32}>
+!ty_i32_f64 = !fir.type<ti32_f64{i:i32,j:f64}>
+!ty_i64_f32 = !fir.type<ti64_f32{i:i64,j:f32}>
+!ty_i64_f64 = !fir.type<ti64_f64{i:i64,j:f64}>
+!ty_f64_i64 = !fir.type<tf64_i64{i:f64,j:i64}>
+!ty_f16_f16 = !fir.type<tf16_f16{i:f16,j:f16}>
+!ty_f32_f32 = !fir.type<tf32_f32{i:f32,j:f32}>
+!ty_f64_f64 = !fir.type<tf64_f64{i:f64,j:f64}>
+!ty_f32_i32_i32 = !fir.type<tf32_i32_i32{i:f32,j:i32,k:i32}>
+!ty_f32_f32_i32 = !fir.type<tf32_f32_i32{i:f32,j:f32,k:i32}>
+!ty_f32_f32_f32 = !fir.type<tf32_f32_f32{i:f32,j:f32,k:f32}>
+
+!ty_i8_a8 = !fir.type<ti8_a8{i:!fir.array<8xi8>}>
+!ty_i8_a16 = !fir.type<ti8_a16{i:!fir.array<16xi8>}>
+!ty_f32_a2 = !fir.type<tf32_a2{i:!fir.array<2xf32>}>
+!ty_f64_a2 = !fir.type<tf64_a2{i:!fir.array<2xf64>}>
+!ty_nested_i32_f32 = !fir.type<t11{i:!ty_i32_f32}>
+!ty_nested_i8_a8_i32 = !fir.type<t12{i:!ty_i8_a8, j:i32}>
+!ty_char1_a8 = !fir.type<t_char_a8{i:!fir.array<8x!fir.char<1>>}>
+
+module attributes {fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", llvm.data_layout = "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128", llvm.target_triple = "loongarch64-unknown-linux-gnu"} {
+
+// CHECK-LABEL: func.func private @test_func_i32_f32(i32, f32)
+func.func private @test_func_i32_f32(%arg0: !ty_i32_f32)
+// CHECK-LABEL: func.func @test_call_i32_f32(
+// CHECK-SAME: %[[ARG0:.*]]: !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>) {
+func.func @test_call_i32_f32(%arg0: !fir.ref<!ty_i32_f32>) {
+ // CHECK: %[[IN:.*]] = fir.load %[[ARG0]] : !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>
+ // CHECK: %[[STACK:.*]] = llvm.intr.stacksave : !llvm.ptr
+ // CHECK: %[[ARR:.*]] = fir.alloca tuple<i32, f32>
+ // CHECK: %[[CVT:.*]] = fir.convert %[[ARR]] : (!fir.ref<tuple<i32, f32>>) -> !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>
+ // CHECK: fir.store %[[IN]] to %[[CVT]] : !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>
+ // CHECK: %[[LD:.*]] = fir.load %[[ARR]] : !fir.ref<tuple<i32, f32>>
+ // CHECK: %[[VAL_0:.*]] = fir.extract_value %[[LD]], [0 : i32] : (tuple<i32, f32>) -> i32
+ // CHECK: %[[VAL_1:.*]] = fir.extract_value %[[LD]], [1 : i32] : (tuple<i32, f32>) -> f32
+ %in = fir.load %arg0 : !fir.ref<!ty_i32_f32>
+ // CHECK: fir.call @test_func_i32_f32(%[[VAL_0]], %[[VAL_...
[truncated]
@llvm/pr-subscribers-flang-fir-hlfir Author: Zhaoxin Yang (ylzsx)
+
+// CHECK-LABEL: func.func private @test_func_f128(i128)
+func.func private @test_func_f128(%arg0: !ty_f128)
+
+// CHECK-LABEL: func.func private @test_func_bf16(i64)
+func.func private @test_func_bf16(%arg0: !ty_bf16)
+
+// CHECK-LABEL: func.func private @test_func_char1(i64)
+func.func private @test_func_char1(%arg0: !ty_char1)
+
+// CHECK-LABEL: func.func private @test_func_char2(i64)
+func.func private @test_func_char2(%arg0: !ty_char2)
+
+// CHECK-LABEL: func.func private @test_func_log1(i64)
+func.func private @test_func_log1(%arg0: !ty_log1)
+
+// CHECK-LABEL: func.func private @test_func_log2(i64)
+func.func private @test_func_log2(%arg0: !ty_log2)
+
+// CHECK-LABEL: func.func private @test_func_log4(i64)
+func.func private @test_func_log4(%arg0: !ty_log4)
+
+// CHECK-LABEL: func.func private @test_func_log8(i64)
+func.func private @test_func_log8(%arg0: !ty_log8)
+
+// CHECK-LABEL: func.func private @test_func_log16(i128)
+func.func private @test_func_log16(%arg0: !ty_log16)
+
+// CHECK-LABEL: func.func private @test_func_cmplx_f32(f32, f32)
+func.func private @test_func_cmplx_f32(%arg0: !ty_cmplx_f32)
+
+// CHECK-LABEL: func.func private @test_func_cmplx_f64(f64, f64)
+func.func private @test_func_cmplx_f64(%arg0: !ty_cmplx_f64)
+}
+
+
+/// *************************** Struct with more than one field **************************** ///
+
+// -----
+
+!ty_i32_f32 = !fir.type<ti32_f32{i:i32,j:f32}>
+!ty_i32_f64 = !fir.type<ti32_f64{i:i32,j:f64}>
+!ty_i64_f32 = !fir.type<ti64_f32{i:i64,j:f32}>
+!ty_i64_f64 = !fir.type<ti64_f64{i:i64,j:f64}>
+!ty_f64_i64 = !fir.type<tf64_i64{i:f64,j:i64}>
+!ty_f16_f16 = !fir.type<tf16_f16{i:f16,j:f16}>
+!ty_f32_f32 = !fir.type<tf32_f32{i:f32,j:f32}>
+!ty_f64_f64 = !fir.type<tf64_f64{i:f64,j:f64}>
+!ty_f32_i32_i32 = !fir.type<tf32_i32_i32{i:f32,j:i32,k:i32}>
+!ty_f32_f32_i32 = !fir.type<tf32_f32_i32{i:f32,j:f32,k:i32}>
+!ty_f32_f32_f32 = !fir.type<tf32_f32_f32{i:f32,j:f32,k:f32}>
+
+!ty_i8_a8 = !fir.type<ti8_a8{i:!fir.array<8xi8>}>
+!ty_i8_a16 = !fir.type<ti8_a16{i:!fir.array<16xi8>}>
+!ty_f32_a2 = !fir.type<tf32_a2{i:!fir.array<2xf32>}>
+!ty_f64_a2 = !fir.type<tf64_a2{i:!fir.array<2xf64>}>
+!ty_nested_i32_f32 = !fir.type<t11{i:!ty_i32_f32}>
+!ty_nested_i8_a8_i32 = !fir.type<t12{i:!ty_i8_a8, j:i32}>
+!ty_char1_a8 = !fir.type<t_char_a8{i:!fir.array<8x!fir.char<1>>}>
+
+module attributes {fir.defaultkind = "a1c4d8i4l4r4", fir.kindmap = "", llvm.data_layout = "e-m:e-p:64:64-i64:64-i128:128-n32:64-S128", llvm.target_triple = "loongarch64-unknown-linux-gnu"} {
+
+// CHECK-LABEL: func.func private @test_func_i32_f32(i32, f32)
+func.func private @test_func_i32_f32(%arg0: !ty_i32_f32)
+// CHECK-LABEL: func.func @test_call_i32_f32(
+// CHECK-SAME: %[[ARG0:.*]]: !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>) {
+func.func @test_call_i32_f32(%arg0: !fir.ref<!ty_i32_f32>) {
+ // CHECK: %[[IN:.*]] = fir.load %[[ARG0]] : !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>
+ // CHECK: %[[STACK:.*]] = llvm.intr.stacksave : !llvm.ptr
+ // CHECK: %[[ARR:.*]] = fir.alloca tuple<i32, f32>
+ // CHECK: %[[CVT:.*]] = fir.convert %[[ARR]] : (!fir.ref<tuple<i32, f32>>) -> !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>
+ // CHECK: fir.store %[[IN]] to %[[CVT]] : !fir.ref<!fir.type<ti32_f32{i:i32,j:f32}>>
+ // CHECK: %[[LD:.*]] = fir.load %[[ARR]] : !fir.ref<tuple<i32, f32>>
+ // CHECK: %[[VAL_0:.*]] = fir.extract_value %[[LD]], [0 : i32] : (tuple<i32, f32>) -> i32
+ // CHECK: %[[VAL_1:.*]] = fir.extract_value %[[LD]], [1 : i32] : (tuple<i32, f32>) -> f32
+ %in = fir.load %arg0 : !fir.ref<!ty_i32_f32>
+ // CHECK: fir.call @test_func_i32_f32(%[[VAL_0]], %[[VAL_...
[truncated]
cc @SixWeining
This patch:
- Adds an `mabi` check for LoongArch64. Currently, flang only supports the `mabi=` option set to `lp64d` on LoongArch64; other ABIs will report an error and may be supported in the future.
Hi @tblah, could you take a look and provide your feedback? Thanks in advance! This patch supports both the passing and returning of BIND(C) type parameters for LoongArch.
The code changes look good to me. I am not at all familiar with LoongArch64 so it would be best if somebody else reviews that this implements the ABI correctly.
Currently, `fir::getTypeSizeAndAlignmentOrCrash` does not yet compute the size and alignment of `fir::VectorType`, but we still use it for now. As a result, it reports a TODO message for vector types; this functionality will be implemented in a future patch.
LGTM for the LoongArch bits.
LLVM Buildbot has detected a new failure on a builder. Full details are available at: https://lab.llvm.org/buildbot/#/builders/171/builds/11289

Here is the relevant piece of the build log for reference:
This patch:
- Adds an `mabi` check for LoongArch64. Currently, flang only supports the `mabi=` option set to `lp64d` on LoongArch64; other ABIs will report an error and may be supported in the future.

Reference ABI:
https://github.com/loongson/la-abi-specs/blob/release/lapcs.adoc#subroutine-calling-sequence