[clang] Fix bitfield access unit for vbase corner case #87238


Merged: 1 commit merged into llvm:main on Apr 1, 2024

Conversation

@urnathan (Contributor) commented on Apr 1, 2024

This fixes #87227. My change to bitfield access unit allocation (#65742) causes an ICE for a corner case of vbase allocation: a class where an unshared vbase (i.e. one not placed via the nearly-empty base optimization) is allocated below nvsize. Unfortunately, although there was a testcase for such a class layout, it lacked an immediately preceding bitfield -- which is exactly the situation where the scissor offset must be correct.
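For reference, here is a minimal sketch of the triggering shape, adapted from the testcase added below (the build comments are my own, assuming a standard clang invocation):

// Compile with e.g. `clang++ -std=c++20 -c repro.cpp` on an Itanium C++ ABI
// target; add `-Xclang -fdump-record-layouts-simple` to see the lowering.
struct __attribute__((aligned(4))) Empty {};

struct Char { char a; };

struct D : virtual Char {
  [[no_unique_address]] Empty e0;
  [[no_unique_address]] Empty e1;
  unsigned a : 24; // bitfield span ending immediately before the tail-clipping point
};

D d; // defining an object forces the record layout to be lowered

The virtual base Char ends up at D's dsize, below D's nvsize, with the 24-bit span ending right at that boundary; per the description above, that combination is what previously tripped the ICE.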

The fix is to break the scissor calculation out of accumulateVBases and make accumulateBitFields aware of whether it is creating the base subobject or the complete object, so that it can call the scissor calculator to get the appropriate upper bound. Because the scissor calculation can trigger a base walk, I thought it best to cache the result in accumulateBitFields, as we can reach that point multiple times with unfortunately-sized bitfield spans.
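Concretely, the end-of-fields path in accumulateBitFields now looks roughly like this (a simplified excerpt; the full change is in the diff below):

// ScissorOffset starts at zero, meaning "not yet computed". When a span runs
// to the end of the fields, the usable limit is the tail-clipping offset,
// computed on first use and cached for any later spans.
if (ScissorOffset.isZero()) {
  ScissorOffset = calculateTailClippingOffset(isNonVirtualBaseType);
  assert(!ScissorOffset.isZero() && "Tail clipping at zero");
}
LimitOffset = ScissorOffset;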

In breaking out the scissor calculation, I discovered a couple more member functions that could be const-qualified -- as before, do you want that as a separate PR?

Commit: We must account for unshared vbases that are allocated below nvsize.
@rjmccall (Contributor) left a comment


Doing it in one commit this time seems fine.

@urnathan marked this pull request as ready for review on April 1, 2024 at 19:38
@llvmbot added the clang (Clang issues not falling into any other category) and clang:codegen (IR generation bugs: mangling, exceptions, etc.) labels on Apr 1, 2024
@llvmbot (Member) commented on Apr 1, 2024

@llvm/pr-subscribers-clang-codegen

@llvm/pr-subscribers-clang

Author: Nathan Sidwell (urnathan)


Full diff: https://github.com/llvm/llvm-project/pull/87238.diff

2 Files Affected:

  • (modified) clang/lib/CodeGen/CGRecordLayoutBuilder.cpp (+40-17)
  • (modified) clang/test/CodeGenCXX/bitfield-access-tail.cpp (+73-31)
diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index e32023aeac1e6f..634a55fec5182e 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -185,9 +185,10 @@ struct CGRecordLowering {
   /// Lowers an ASTRecordLayout to a llvm type.
   void lower(bool NonVirtualBaseType);
   void lowerUnion(bool isNoUniqueAddress);
-  void accumulateFields();
+  void accumulateFields(bool isNonVirtualBaseType);
   RecordDecl::field_iterator
-  accumulateBitFields(RecordDecl::field_iterator Field,
+  accumulateBitFields(bool isNonVirtualBaseType,
+                      RecordDecl::field_iterator Field,
                       RecordDecl::field_iterator FieldEnd);
   void computeVolatileBitfields();
   void accumulateBases();
@@ -195,8 +196,10 @@ struct CGRecordLowering {
   void accumulateVBases();
   /// Recursively searches all of the bases to find out if a vbase is
   /// not the primary vbase of some base class.
-  bool hasOwnStorage(const CXXRecordDecl *Decl, const CXXRecordDecl *Query);
+  bool hasOwnStorage(const CXXRecordDecl *Decl,
+                     const CXXRecordDecl *Query) const;
   void calculateZeroInit();
+  CharUnits calculateTailClippingOffset(bool isNonVirtualBaseType) const;
   /// Lowers bitfield storage types to I8 arrays for bitfields with tail
   /// padding that is or can potentially be used.
   void clipTailPadding();
@@ -287,7 +290,7 @@ void CGRecordLowering::lower(bool NVBaseType) {
     computeVolatileBitfields();
     return;
   }
-  accumulateFields();
+  accumulateFields(NVBaseType);
   // RD implies C++.
   if (RD) {
     accumulateVPtrs();
@@ -378,12 +381,12 @@ void CGRecordLowering::lowerUnion(bool isNoUniqueAddress) {
     Packed = true;
 }
 
-void CGRecordLowering::accumulateFields() {
+void CGRecordLowering::accumulateFields(bool isNonVirtualBaseType) {
   for (RecordDecl::field_iterator Field = D->field_begin(),
                                   FieldEnd = D->field_end();
        Field != FieldEnd;) {
     if (Field->isBitField()) {
-      Field = accumulateBitFields(Field, FieldEnd);
+      Field = accumulateBitFields(isNonVirtualBaseType, Field, FieldEnd);
       assert((Field == FieldEnd || !Field->isBitField()) &&
              "Failed to accumulate all the bitfields");
     } else if (Field->isZeroSize(Context)) {
@@ -404,9 +407,12 @@ void CGRecordLowering::accumulateFields() {
 }
 
 // Create members for bitfields. Field is a bitfield, and FieldEnd is the end
-// iterator of the record. Return the first non-bitfield encountered.
+// iterator of the record. Return the first non-bitfield encountered.  We need
+// to know whether this is the base or complete layout, as virtual bases could
+// affect the upper bound of bitfield access unit allocation.
 RecordDecl::field_iterator
-CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
+CGRecordLowering::accumulateBitFields(bool isNonVirtualBaseType,
+                                      RecordDecl::field_iterator Field,
                                       RecordDecl::field_iterator FieldEnd) {
   if (isDiscreteBitFieldABI()) {
     // Run stores the first element of the current run of bitfields. FieldEnd is
@@ -505,6 +511,10 @@ CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
       bitsToCharUnits(Context.getTargetInfo().getRegisterWidth());
   unsigned CharBits = Context.getCharWidth();
 
+  // Limit of useable tail padding at end of the record. Computed lazily and
+  // cached here.
+  CharUnits ScissorOffset = CharUnits::Zero();
+
   // Data about the start of the span we're accumulating to create an access
   // unit from. Begin is the first bitfield of the span. If Begin is FieldEnd,
   // we've not got a current span. The span starts at the BeginOffset character
@@ -630,10 +640,14 @@ CGRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
               LimitOffset = bitsToCharUnits(getFieldBitOffset(*Probe));
               goto FoundLimit;
             }
-          // We reached the end of the fields.  We can't necessarily use tail
-          // padding in C++ structs, so the NonVirtual size is what we must
-          // use there.
-          LimitOffset = RD ? Layout.getNonVirtualSize() : Layout.getDataSize();
+          // We reached the end of the fields, determine the bounds of useable
+          // tail padding. As this can be complex for C++, we cache the result.
+          if (ScissorOffset.isZero()) {
+            ScissorOffset = calculateTailClippingOffset(isNonVirtualBaseType);
+            assert(!ScissorOffset.isZero() && "Tail clipping at zero");
+          }
+
+          LimitOffset = ScissorOffset;
         FoundLimit:;
 
           CharUnits TypeSize = getSize(Type);
@@ -838,13 +852,17 @@ void CGRecordLowering::accumulateVPtrs() {
                    llvm::PointerType::getUnqual(Types.getLLVMContext())));
 }
 
-void CGRecordLowering::accumulateVBases() {
+CharUnits
+CGRecordLowering::calculateTailClippingOffset(bool isNonVirtualBaseType) const {
+  if (!RD)
+    return Layout.getDataSize();
+
   CharUnits ScissorOffset = Layout.getNonVirtualSize();
   // In the itanium ABI, it's possible to place a vbase at a dsize that is
   // smaller than the nvsize.  Here we check to see if such a base is placed
   // before the nvsize and set the scissor offset to that, instead of the
   // nvsize.
-  if (isOverlappingVBaseABI())
+  if (!isNonVirtualBaseType && isOverlappingVBaseABI())
     for (const auto &Base : RD->vbases()) {
       const CXXRecordDecl *BaseDecl = Base.getType()->getAsCXXRecordDecl();
       if (BaseDecl->isEmpty())
@@ -856,8 +874,13 @@ void CGRecordLowering::accumulateVBases() {
       ScissorOffset = std::min(ScissorOffset,
                                Layout.getVBaseClassOffset(BaseDecl));
     }
-  Members.push_back(MemberInfo(ScissorOffset, MemberInfo::Scissor, nullptr,
-                               RD));
+
+  return ScissorOffset;
+}
+
+void CGRecordLowering::accumulateVBases() {
+  Members.push_back(MemberInfo(calculateTailClippingOffset(false),
+                               MemberInfo::Scissor, nullptr, RD));
   for (const auto &Base : RD->vbases()) {
     const CXXRecordDecl *BaseDecl = Base.getType()->getAsCXXRecordDecl();
     if (BaseDecl->isEmpty())
@@ -882,7 +905,7 @@ void CGRecordLowering::accumulateVBases() {
 }
 
 bool CGRecordLowering::hasOwnStorage(const CXXRecordDecl *Decl,
-                                     const CXXRecordDecl *Query) {
+                                     const CXXRecordDecl *Query) const {
   const ASTRecordLayout &DeclLayout = Context.getASTRecordLayout(Decl);
   if (DeclLayout.isPrimaryBaseVirtual() && DeclLayout.getPrimaryBase() == Query)
     return false;
diff --git a/clang/test/CodeGenCXX/bitfield-access-tail.cpp b/clang/test/CodeGenCXX/bitfield-access-tail.cpp
index 68716fdf3b1daa..1539e17cad4369 100644
--- a/clang/test/CodeGenCXX/bitfield-access-tail.cpp
+++ b/clang/test/CodeGenCXX/bitfield-access-tail.cpp
@@ -2,45 +2,45 @@
 
 // Configs that have cheap unaligned access
 // Little Endian
-// RUN: %clang_cc1 -triple=aarch64-apple-darwin %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=aarch64-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
+// RUN: %clang_cc1 -triple=aarch64-apple-darwin %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=aarch64-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
 // RUN: %clang_cc1 -triple=arm-apple-darwin %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT-DWN32 %s
-// RUN: %clang_cc1 -triple=arm-none-eabi %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=i686-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=loongarch64-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=powerpcle-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=ve-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=wasm32 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=wasm64 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=x86_64-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
+// RUN: %clang_cc1 -triple=arm-none-eabi %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=i686-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=loongarch64-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=powerpcle-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=ve-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=wasm32 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=wasm64 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=x86_64-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
 
 // Big Endian
-// RUN: %clang_cc1 -triple=powerpc-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=powerpc64-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=systemz %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
+// RUN: %clang_cc1 -triple=powerpc-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=powerpc64-linux-gnu %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=systemz %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
 
 // Configs that have expensive unaligned access
 // Little Endian
-// RUN: %clang_cc1 -triple=amdgcn-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=arc-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=bpf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=csky %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=hexagon-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=le64-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=loongarch32-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=nvptx-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=riscv32 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=riscv64 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=spir-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=xcore-none-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
+// RUN: %clang_cc1 -triple=amdgcn-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=arc-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=bpf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=csky %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=hexagon-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=le64-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=loongarch32-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=nvptx-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=riscv32 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=riscv64 %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=spir-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=xcore-none-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
 
 // Big endian
-// RUN: %clang_cc1 -triple=lanai-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=m68k-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=mips-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=mips64-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=sparc-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
-// RUN: %clang_cc1 -triple=tce-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT %s
+// RUN: %clang_cc1 -triple=lanai-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=m68k-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=mips-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=mips64-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT64 %s
+// RUN: %clang_cc1 -triple=sparc-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
+// RUN: %clang_cc1 -triple=tce-elf %s -emit-llvm -o /dev/null -fdump-record-layouts-simple | FileCheck --check-prefixes CHECK,LAYOUT,LAYOUT32 %s
 
 // Can use tail padding
 struct Pod {
@@ -113,3 +113,45 @@ struct __attribute__((packed)) PNonPod {
 // LAYOUT-DWN32-NEXT: <CGBitFieldInfo Offset:{{[0-9]+}} Size:16 IsSigned:1 StorageSize:16 StorageOffset:0
 // LAYOUT-DWN32-NEXT: <CGBitFieldInfo Offset:{{[0-9]+}} Size:8 IsSigned:1 StorageSize:8 StorageOffset:2
 // CHECK-NEXT: ]>
+
+struct __attribute__((aligned(4))) Empty {} empty;
+
+struct Char { char a; } cbase;
+struct D : virtual Char {
+  [[no_unique_address]] Empty e0;
+  [[no_unique_address]] Empty e1;
+  unsigned a : 24; // keep as 24bits
+} d;
+// CHECK-LABEL: LLVMType:%struct.D =
+// LAYOUT64-SAME: type <{ ptr, [3 x i8], %struct.Char, [4 x i8] }>
+// LAYOUT32-SAME: type { ptr, [3 x i8], %struct.Char }
+// LAYOUT-DWN32-SAME: type { ptr, [3 x i8], %struct.Char }
+// CHECK-NEXT: NonVirtualBaseLLVMType:
+// LAYOUT64-SAME: %struct.D.base = type <{ ptr, i32 }>
+// LAYOUT32-SAME: %struct.D = type { ptr, [3 x i8], %struct.Char }
+// LAYOUT-DWN32-SAME: %struct.D = type { ptr, [3 x i8], %struct.Char }
+// CHECK: BitFields:[
+// LAYOUT-NEXT: <CGBitFieldInfo Offset:{{[0-9]+}} Size:24 IsSigned:0 StorageSize:24 StorageOffset:{{(4|8)}}
+
+// LAYOUT-DWN32-NEXT: <CGBitFieldInfo Offset:{{[0-9]+}} Size:24 IsSigned:0 StorageSize:24 StorageOffset:{{(4|8)}}
+// CHECK-NEXT: ]>
+
+struct Int { int a; } ibase;
+struct E : virtual Int {
+  [[no_unique_address]] Empty e0;
+  [[no_unique_address]] Empty e1;
+  unsigned a : 24; // expand to 32
+} e;
+// CHECK-LABEL: LLVMType:%struct.E =
+// LAYOUT64-SAME: type <{ ptr, i32, %struct.Int }>
+// LAYOUT32-SAME: type { ptr, i32, %struct.Int }
+// LAYOUT-DWN32-SAME: type { ptr, i32, %struct.Int }
+// CHECK-NEXT: NonVirtualBaseLLVMType:%struct.E.base =
+// LAYOUT64-SAME: type <{ ptr, i32 }>
+// LAYOUT32-SAME: type { ptr, i32 }
+// LAYOUT-DWN32-SAME: type { ptr, i32 }
+// CHECK: BitFields:[
+// LAYOUT-NEXT: <CGBitFieldInfo Offset:{{[0-9]+}} Size:24 IsSigned:0 StorageSize:32 StorageOffset:{{(4|8)}}
+
+// LAYOUT-DWN32-NEXT: <CGBitFieldInfo Offset:{{[0-9]+}} Size:24 IsSigned:0 StorageSize:32 StorageOffset:{{(4|8)}}
+// CHECK-NEXT: ]>

@urnathan merged commit ee99475 into llvm:main on Apr 1, 2024
Labels: clang:codegen (IR generation bugs: mangling, exceptions, etc.), clang (Clang issues not falling into any other category)
Linked issue: ICE due to new bitfield algorithm (PR65742)
3 participants