[scudo] Remove unused field in BatchGroup #109322
Merged
Conversation
Also fix the logic while determining the size of BatchClass and update some legacy comments.
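To make the reasoning behind the size change easier to follow, here is a minimal standalone sketch (not the Scudo sources). Blocks of the BatchClass are reused to store either a TransferBatch or a BatchGroup, so once the static_assert tying the two sizes together is removed, the class size has to cover the larger of the two. Only the type and field names mirror the patch; the stub layouts and capacities below are simplified assumptions for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Simplified stand-ins for the two metadata records carved out of
// BatchClass blocks. Field layouts here are assumptions, not the real ones.
struct TransferBatchStub {
  void *Next;
  std::uint16_t Count;
  void *Blocks[32]; // illustrative capacity only
};

struct BatchGroupStub {
  void *Next;
  std::uintptr_t CompactPtrGroupBase;
  std::uintptr_t BytesInBGAtLastCheckpoint;
  void *Batches; // stand-in for the SinglyLinkedList of TransferBatches
  std::uint16_t MaxCachedPerBatch;
};

// A BatchClass block may hold either record, so its size must cover both.
// The old code instead asserted sizeof(BatchGroup) <= sizeof(TransferBatch).
constexpr std::size_t BatchClassBlockSize =
    std::max(sizeof(TransferBatchStub), sizeof(BatchGroupStub));

static_assert(BatchClassBlockSize >= sizeof(TransferBatchStub), "covers TransferBatch");
static_assert(BatchClassBlockSize >= sizeof(BatchGroupStub), "covers BatchGroup");
```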
@llvm/pr-subscribers-compiler-rt-sanitizer
Author: None (ChiaHungDuan)

Changes: Also fix the logic while determining the size of BatchClass and update some legacy comments.

Full diff: https://github.com/llvm/llvm-project/pull/109322.diff

3 Files Affected:
diff --git a/compiler-rt/lib/scudo/standalone/allocator_common.h b/compiler-rt/lib/scudo/standalone/allocator_common.h
index 2b77516ad11cae..0169601f9c6254 100644
--- a/compiler-rt/lib/scudo/standalone/allocator_common.h
+++ b/compiler-rt/lib/scudo/standalone/allocator_common.h
@@ -75,16 +75,13 @@ template <class SizeClassAllocator> struct BatchGroup {
BatchGroup *Next;
// The compact base address of each group
uptr CompactPtrGroupBase;
- // Cache value of SizeClassAllocatorLocalCache::getMaxCached()
- u16 MaxCachedPerBatch;
- // Number of blocks pushed into this group. This is an increment-only
- // counter.
- uptr PushedBlocks;
// This is used to track how many bytes are not in-use since last time we
// tried to release pages.
uptr BytesInBGAtLastCheckpoint;
// Blocks are managed by TransferBatch in a list.
SinglyLinkedList<TransferBatch<SizeClassAllocator>> Batches;
+ // Cache value of SizeClassAllocatorLocalCache::getMaxCached()
+ u16 MaxCachedPerBatch;
};
} // namespace scudo
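For reference, this is how the BatchGroup struct reads after the patch, reconstructed from the hunk above (members or comments outside the hunk's context are not shown):

```cpp
template <class SizeClassAllocator> struct BatchGroup {
  BatchGroup *Next;
  // The compact base address of each group
  uptr CompactPtrGroupBase;
  // This is used to track how many bytes are not in-use since last time we
  // tried to release pages.
  uptr BytesInBGAtLastCheckpoint;
  // Blocks are managed by TransferBatch in a list.
  SinglyLinkedList<TransferBatch<SizeClassAllocator>> Batches;
  // Cache value of SizeClassAllocatorLocalCache::getMaxCached()
  u16 MaxCachedPerBatch;
};
```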
diff --git a/compiler-rt/lib/scudo/standalone/primary32.h b/compiler-rt/lib/scudo/standalone/primary32.h
index 9ef5e7102e3220..654b129d9f5471 100644
--- a/compiler-rt/lib/scudo/standalone/primary32.h
+++ b/compiler-rt/lib/scudo/standalone/primary32.h
@@ -56,12 +56,9 @@ template <typename Config> class SizeClassAllocator32 {
typedef TransferBatch<ThisT> TransferBatchT;
typedef BatchGroup<ThisT> BatchGroupT;
- static_assert(sizeof(BatchGroupT) <= sizeof(TransferBatchT),
- "BatchGroupT uses the same class size as TransferBatchT");
-
static uptr getSizeByClassId(uptr ClassId) {
return (ClassId == SizeClassMap::BatchClassId)
- ? sizeof(TransferBatchT)
+ ? Max(sizeof(BatchGroupT), sizeof(TransferBatchT))
: SizeClassMap::getSizeByClassId(ClassId);
}
@@ -531,9 +528,6 @@ template <typename Config> class SizeClassAllocator32 {
// BatchClass hasn't enabled memory group. Use `0` to indicate there's no
// memory group here.
BG->CompactPtrGroupBase = 0;
- // `BG` is also the block of BatchClassId. Note that this is different
- // from `CreateGroup` in `pushBlocksImpl`
- BG->PushedBlocks = 1;
BG->BytesInBGAtLastCheckpoint = 0;
BG->MaxCachedPerBatch =
CacheT::getMaxCached(getSizeByClassId(SizeClassMap::BatchClassId));
@@ -558,9 +552,6 @@ template <typename Config> class SizeClassAllocator32 {
TB->add(
compactPtr(SizeClassMap::BatchClassId, reinterpret_cast<uptr>(BG)));
--Size;
- DCHECK_EQ(BG->PushedBlocks, 1U);
- // `TB` is also the block of BatchClassId.
- BG->PushedBlocks += 1;
BG->Batches.push_front(TB);
}
@@ -587,8 +578,6 @@ template <typename Config> class SizeClassAllocator32 {
CurBatch->appendFromArray(&Array[I], AppendSize);
I += AppendSize;
}
-
- BG->PushedBlocks += Size;
}
// Push the blocks to their batch group. The layout will be like,
//
@@ -624,7 +613,6 @@ template <typename Config> class SizeClassAllocator32 {
BG->CompactPtrGroupBase = CompactPtrGroupBase;
BG->Batches.push_front(TB);
- BG->PushedBlocks = 0;
BG->BytesInBGAtLastCheckpoint = 0;
BG->MaxCachedPerBatch = TransferBatchT::MaxNumCached;
@@ -652,8 +640,6 @@ template <typename Config> class SizeClassAllocator32 {
CurBatch->appendFromArray(&Array[I], AppendSize);
I += AppendSize;
}
-
- BG->PushedBlocks += Size;
};
Sci->FreeListInfo.PushedBlocks += Size;
diff --git a/compiler-rt/lib/scudo/standalone/primary64.h b/compiler-rt/lib/scudo/standalone/primary64.h
index a3b6e309ed3fce..a3876479ec8158 100644
--- a/compiler-rt/lib/scudo/standalone/primary64.h
+++ b/compiler-rt/lib/scudo/standalone/primary64.h
@@ -61,12 +61,12 @@ template <typename Config> class SizeClassAllocator64 {
typedef TransferBatch<ThisT> TransferBatchT;
typedef BatchGroup<ThisT> BatchGroupT;
- static_assert(sizeof(BatchGroupT) <= sizeof(TransferBatchT),
- "BatchGroupT uses the same class size as TransferBatchT");
-
+ // BatchClass is used to store internal metadata so it needs to be at least as
+ // large as the largest data structure.
static uptr getSizeByClassId(uptr ClassId) {
return (ClassId == SizeClassMap::BatchClassId)
- ? roundUp(sizeof(TransferBatchT), 1U << CompactPtrScale)
+ ? roundUp(Max(sizeof(TransferBatchT), sizeof(BatchGroupT)),
+ 1U << CompactPtrScale)
: SizeClassMap::getSizeByClassId(ClassId);
}
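The 64-bit primary additionally rounds the BatchClass size up to the compact-pointer granularity. Below is a minimal sketch of that computation; all concrete numbers are purely illustrative assumptions, since the real sizes depend on the target and configuration.

```cpp
#include <cstddef>

// Same rounding idea as the patched getSizeByClassId() for BatchClassId:
// take the larger of the two metadata records, then round up to the
// compact-pointer alignment. Boundary must be a power of two.
constexpr std::size_t roundUpTo(std::size_t X, std::size_t Boundary) {
  return (X + Boundary - 1) & ~(Boundary - 1);
}

constexpr std::size_t CompactPtrScale = 4;       // assumed 16-byte granularity
constexpr std::size_t SizeofTransferBatch = 272; // assumed, illustration only
constexpr std::size_t SizeofBatchGroup = 48;     // assumed, illustration only

constexpr std::size_t BatchClassSize =
    roundUpTo(SizeofTransferBatch > SizeofBatchGroup ? SizeofTransferBatch
                                                     : SizeofBatchGroup,
              std::size_t(1) << CompactPtrScale);

static_assert(BatchClassSize == 272, "272 is already 16-byte aligned");
```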
@@ -236,7 +236,7 @@ template <typename Config> class SizeClassAllocator64 {
} else {
while (true) {
// When two threads compete for `Region->MMLock`, we only want one of
- // them to call populateFreeListAndPopBatch(). To avoid both of them
+ // them to call populateFreeListAndPopBlocks(). To avoid both of them
// doing that, always check the freelist before mapping new pages.
ScopedLock ML(Region->MMLock);
{
@@ -690,9 +690,6 @@ template <typename Config> class SizeClassAllocator64 {
// BatchClass hasn't enabled memory group. Use `0` to indicate there's no
// memory group here.
BG->CompactPtrGroupBase = 0;
- // `BG` is also the block of BatchClassId. Note that this is different
- // from `CreateGroup` in `pushBlocksImpl`
- BG->PushedBlocks = 1;
BG->BytesInBGAtLastCheckpoint = 0;
BG->MaxCachedPerBatch =
CacheT::getMaxCached(getSizeByClassId(SizeClassMap::BatchClassId));
@@ -717,9 +714,6 @@ template <typename Config> class SizeClassAllocator64 {
TB->add(
compactPtr(SizeClassMap::BatchClassId, reinterpret_cast<uptr>(BG)));
--Size;
- DCHECK_EQ(BG->PushedBlocks, 1U);
- // `TB` is also the block of BatchClassId.
- BG->PushedBlocks += 1;
BG->Batches.push_front(TB);
}
@@ -746,8 +740,6 @@ template <typename Config> class SizeClassAllocator64 {
CurBatch->appendFromArray(&Array[I], AppendSize);
I += AppendSize;
}
-
- BG->PushedBlocks += Size;
}
// Push the blocks to their batch group. The layout will be like,
@@ -782,7 +774,6 @@ template <typename Config> class SizeClassAllocator64 {
BG->CompactPtrGroupBase = CompactPtrGroupBase;
BG->Batches.push_front(TB);
- BG->PushedBlocks = 0;
BG->BytesInBGAtLastCheckpoint = 0;
BG->MaxCachedPerBatch = TransferBatchT::MaxNumCached;
@@ -810,8 +801,6 @@ template <typename Config> class SizeClassAllocator64 {
CurBatch->appendFromArray(&Array[I], AppendSize);
I += AppendSize;
}
-
- BG->PushedBlocks += Size;
};
Region->FreeListInfo.PushedBlocks += Size;
@@ -884,7 +873,7 @@ template <typename Config> class SizeClassAllocator64 {
while (true) {
// We only expect one thread doing the freelist refillment and other
// threads will be waiting for either the completion of the
- // `populateFreeListAndPopBatch()` or `pushBlocks()` called by other
+ // `populateFreeListAndPopBlocks()` or `pushBlocks()` called by other
// threads.
bool PopulateFreeList = false;
{
@@ -922,7 +911,7 @@ template <typename Config> class SizeClassAllocator64 {
// At here, there are two preconditions to be met before waiting,
// 1. The freelist is empty.
// 2. Region->isPopulatingFreeList == true, i.e, someone is still doing
- // `populateFreeListAndPopBatch()`.
+ // `populateFreeListAndPopBlocks()`.
//
// Note that it has the chance that freelist is empty but
// Region->isPopulatingFreeList == false because all the new populated
@@ -938,8 +927,8 @@ template <typename Config> class SizeClassAllocator64 {
// Now the freelist is empty and someone's doing the refillment. We will
// wait until anyone refills the freelist or someone finishes doing
- // `populateFreeListAndPopBatch()`. The refillment can be done by
- // `populateFreeListAndPopBatch()`, `pushBlocks()`,
+ // `populateFreeListAndPopBlocks()`. The refillment can be done by
+ // `populateFreeListAndPopBlocks()`, `pushBlocks()`,
// `pushBatchClassBlocks()` and `mergeGroupsToReleaseBack()`.
Region->FLLockCV.wait(Region->FLLock);
@@ -1119,8 +1108,8 @@ template <typename Config> class SizeClassAllocator64 {
// Note that `PushedBlocks` and `PoppedBlocks` are supposed to only record
// the requests from `PushBlocks` and `PopBatch` which are external
- // interfaces. `populateFreeListAndPopBatch` is the internal interface so we
- // should set the values back to avoid incorrectly setting the stats.
+ // interfaces. `populateFreeListAndPopBlocks` is the internal interface so
+ // we should set the values back to avoid incorrectly setting the stats.
Region->FreeListInfo.PushedBlocks -= NumberOfBlocks;
const uptr AllocatedUser = Size * NumberOfBlocks;
@@ -1689,7 +1678,6 @@ template <typename Config> class SizeClassAllocator64 {
GroupsToRelease.pop_front();
if (BG->CompactPtrGroupBase == Cur->CompactPtrGroupBase) {
- BG->PushedBlocks += Cur->PushedBlocks;
// We have updated `BatchGroup::BytesInBGAtLastCheckpoint` while
// collecting the `GroupsToRelease`.
BG->BytesInBGAtLastCheckpoint = Cur->BytesInBGAtLastCheckpoint;
cferris1000 approved these changes · Sep 19, 2024
LGTM
tmsri pushed a commit to tmsri/llvm-project that referenced this pull request · Sep 19, 2024
Also fix the logic while determining the size of BatchClass and update some legacy comments.