
[scudo] Refactor store() and retrieve(). #102024


Merged
merged 1 commit into llvm:main from JoshuaMBa:refactor_store_and_retrieve on Aug 6, 2024

Conversation

JoshuaMBa
Contributor

store() and retrieve() have been refactored so that the scudo headers
are abstracted away from cache operations.
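
As a rough mental model of the new interface, here is a standalone toy sketch (not the scudo code: ToyCache, its single-slot storage, and the constants are illustrative stand-ins; the real types live in compiler-rt/lib/scudo/standalone/secondary.h). After the refactor, the cache takes the raw block fields in store() and hands back a value-type CachedBlock from retrieve(), reporting the header position through an out-parameter instead of filling in a LargeBlock::Header itself:

#include <cassert>
#include <cstdint>

// Simplified stand-ins for the scudo types (illustrative only).
using uptr = std::uintptr_t;
using u64 = std::uint64_t;
struct MemMapT { uptr Base = 0, Capacity = 0; };

// The plain value type the refactored cache traffics in instead of headers.
struct CachedBlock {
  uptr CommitBase = 0, CommitSize = 0, BlockBegin = 0;
  MemMapT MemMap{};
  u64 Time = 0;
  bool isValid() const { return CommitBase != 0; }
};

// Single-slot toy cache exposing the post-refactor call shape:
//   store(CommitBase, CommitSize, BlockBegin, MemMap)    -- raw fields in
//   retrieve(Size, Alignment, HeadersSize, &HeaderPos)   -- CachedBlock out
class ToyCache {
public:
  void store(uptr CommitBase, uptr CommitSize, uptr BlockBegin, MemMapT MemMap) {
    Slot = CachedBlock{CommitBase, CommitSize, BlockBegin, MemMap, /*Time=*/1};
  }
  CachedBlock retrieve(uptr Size, uptr /*Alignment*/, uptr HeadersSize,
                       uptr &EntryHeaderPos) {
    if (!Slot.isValid() || Slot.CommitSize < Size)
      return CachedBlock{}; // an invalid entry signals a cache miss
    EntryHeaderPos = Slot.BlockBegin - HeadersSize;
    return Slot;
  }
private:
  CachedBlock Slot; // the real cache keeps an LRU array of entries
};

int main() {
  ToyCache Cache;
  constexpr uptr HeadersSize = 64;
  // Deallocation side: pass raw fields rather than a LargeBlock::Header *.
  Cache.store(0x1000, 0x4000, 0x1000 + HeadersSize, MemMapT{0x1000, 0x5000});
  // Allocation side: check isValid() on the returned entry.
  uptr EntryHeaderPos = 0;
  CachedBlock Entry = Cache.retrieve(0x2000, 16, HeadersSize, EntryHeaderPos);
  assert(Entry.isValid() && EntryHeaderPos == 0x1000);
  return 0;
}

Keeping header reconstruction out of the cache means every cache implementation, including MapAllocatorNoCache, only deals in plain values, as the diff below shows.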

@llvmbot
Member

llvmbot commented Aug 5, 2024

@llvm/pr-subscribers-compiler-rt-sanitizer

Author: Joshua Baehring (JoshuaMBa)

Changes

store() and retrieve() have been refactored so that the scudo headers
are abstracted away from cache operations.


Full diff: https://github.com/llvm/llvm-project/pull/102024.diff

1 File Affected:

  • (modified) compiler-rt/lib/scudo/standalone/secondary.h (+81-54)
diff --git a/compiler-rt/lib/scudo/standalone/secondary.h b/compiler-rt/lib/scudo/standalone/secondary.h
index d8505742d6054..9f12f78382102 100644
--- a/compiler-rt/lib/scudo/standalone/secondary.h
+++ b/compiler-rt/lib/scudo/standalone/secondary.h
@@ -65,11 +65,10 @@ template <typename Config> static Header *getHeader(const void *Ptr) {
 
 } // namespace LargeBlock
 
-static inline void unmap(LargeBlock::Header *H) {
-  // Note that the `H->MapMap` is stored on the pages managed by itself. Take
+static inline void unmap(MemMapT &MemMap) {
+  // Note that header `MemMap`s are stored on the pages managed by itself. Take
   // over the ownership before unmap() so that any operation along with unmap()
   // won't touch inaccessible pages.
-  MemMapT MemMap = H->MemMap;
   MemMap.unmap(MemMap.getBase(), MemMap.getCapacity());
 }
 
@@ -96,12 +95,15 @@ struct CachedBlock {
 template <typename Config> class MapAllocatorNoCache {
 public:
   void init(UNUSED s32 ReleaseToOsInterval) {}
-  bool retrieve(UNUSED Options Options, UNUSED uptr Size, UNUSED uptr Alignment,
-                UNUSED uptr HeadersSize, UNUSED LargeBlock::Header **H,
-                UNUSED bool *Zeroed) {
-    return false;
+  CachedBlock retrieve(UNUSED uptr Size, UNUSED uptr Alignment,
+                       UNUSED uptr HeadersSize, UNUSED uptr &EntryHeaderPos) {
+    return {};
   }
-  void store(UNUSED Options Options, LargeBlock::Header *H) { unmap(H); }
+  void store(UNUSED Options Options, uptr CommitBase, uptr CommitSize,
+             uptr BlockBegin, MemMapT MemMap) {
+    unmap(MemMap);
+  }
+
   bool canCache(UNUSED uptr Size) { return false; }
   void disable() {}
   void enable() {}
@@ -239,19 +241,25 @@ template <typename Config> class MapAllocatorCache {
     Entries[Config::getEntriesArraySize() - 1].Next = CachedBlock::InvalidEntry;
   }
 
-  void store(const Options &Options, LargeBlock::Header *H) EXCLUDES(Mutex) {
-    if (!canCache(H->CommitSize))
-      return unmap(H);
+  void store(const Options &Options, uptr CommitBase, uptr CommitSize,
+             uptr BlockBegin, MemMapT MemMap) EXCLUDES(Mutex) {
+
+    CachedBlock Entry;
+    Entry.CommitBase = CommitBase;
+    Entry.CommitSize = CommitSize;
+    Entry.BlockBegin = BlockBegin;
+    Entry.MemMap = MemMap;
+    Entry.Time = UINT64_MAX;
+    store(Options, Entry);
+  }
+
+  void store(const Options &Options, CachedBlock &Entry) EXCLUDES(Mutex) {
+    if (!canCache(Entry.CommitSize))
+      return unmap(Entry.MemMap);
 
     const s32 Interval = atomic_load_relaxed(&ReleaseToOsIntervalMs);
     u64 Time;
-    CachedBlock Entry;
 
-    Entry.CommitBase = H->CommitBase;
-    Entry.CommitSize = H->CommitSize;
-    Entry.BlockBegin = reinterpret_cast<uptr>(H + 1);
-    Entry.MemMap = H->MemMap;
-    Entry.Time = UINT64_MAX;
     if (useMemoryTagging<Config>(Options)) {
       if (Interval == 0 && !SCUDO_FUCHSIA) {
         // Release the memory and make it inaccessible at the same time by
@@ -290,7 +298,7 @@ template <typename Config> class MapAllocatorCache {
         // read Options and when we locked Mutex. We can't insert our entry into
         // the quarantine or the cache because the permissions would be wrong so
         // just unmap it.
-        Entry.MemMap.unmap(Entry.MemMap.getBase(), Entry.MemMap.getCapacity());
+        unmap(Entry.MemMap);
         break;
       }
       if (Config::getQuarantineSize() && useMemoryTagging<Config>(Options)) {
@@ -321,7 +329,7 @@ template <typename Config> class MapAllocatorCache {
     } while (0);
 
     for (MemMapT &EvictMemMap : EvictionMemMaps)
-      EvictMemMap.unmap(EvictMemMap.getBase(), EvictMemMap.getCapacity());
+      unmap(EvictMemMap);
 
     if (Interval >= 0) {
       // TODO: Add ReleaseToOS logic to LRU algorithm
@@ -329,20 +337,20 @@ template <typename Config> class MapAllocatorCache {
     }
   }
 
-  bool retrieve(Options Options, uptr Size, uptr Alignment, uptr HeadersSize,
-                LargeBlock::Header **H, bool *Zeroed) EXCLUDES(Mutex) {
+  CachedBlock retrieve(uptr Size, uptr Alignment, uptr HeadersSize,
+                       uptr &EntryHeaderPos) EXCLUDES(Mutex) {
     const uptr PageSize = getPageSizeCached();
     // 10% of the requested size proved to be the optimal choice for
     // retrieving cached blocks after testing several options.
     constexpr u32 FragmentedBytesDivisor = 10;
     bool Found = false;
     CachedBlock Entry;
-    uptr EntryHeaderPos = 0;
+    EntryHeaderPos = 0;
     {
       ScopedLock L(Mutex);
       CallsToRetrieve++;
       if (EntriesCount == 0)
-        return false;
+        return Entry;
       u32 OptimalFitIndex = 0;
       uptr MinDiff = UINTPTR_MAX;
       for (u32 I = LRUHead; I != CachedBlock::InvalidEntry;
@@ -383,29 +391,8 @@ template <typename Config> class MapAllocatorCache {
         SuccessfulRetrieves++;
       }
     }
-    if (!Found)
-      return false;
 
-    *H = reinterpret_cast<LargeBlock::Header *>(
-        LargeBlock::addHeaderTag<Config>(EntryHeaderPos));
-    *Zeroed = Entry.Time == 0;
-    if (useMemoryTagging<Config>(Options))
-      Entry.MemMap.setMemoryPermission(Entry.CommitBase, Entry.CommitSize, 0);
-    uptr NewBlockBegin = reinterpret_cast<uptr>(*H + 1);
-    if (useMemoryTagging<Config>(Options)) {
-      if (*Zeroed) {
-        storeTags(LargeBlock::addHeaderTag<Config>(Entry.CommitBase),
-                  NewBlockBegin);
-      } else if (Entry.BlockBegin < NewBlockBegin) {
-        storeTags(Entry.BlockBegin, NewBlockBegin);
-      } else {
-        storeTags(untagPointer(NewBlockBegin), untagPointer(Entry.BlockBegin));
-      }
-    }
-    (*H)->CommitBase = Entry.CommitBase;
-    (*H)->CommitSize = Entry.CommitSize;
-    (*H)->MemMap = Entry.MemMap;
-    return true;
+    return Entry;
   }
 
   bool canCache(uptr Size) {
@@ -444,7 +431,7 @@ template <typename Config> class MapAllocatorCache {
     for (u32 I = 0; I != Config::getQuarantineSize(); ++I) {
       if (Quarantine[I].isValid()) {
         MemMapT &MemMap = Quarantine[I].MemMap;
-        MemMap.unmap(MemMap.getBase(), MemMap.getCapacity());
+        unmap(MemMap);
         Quarantine[I].invalidate();
       }
     }
@@ -538,7 +525,7 @@ template <typename Config> class MapAllocatorCache {
     }
     for (uptr I = 0; I < N; I++) {
       MemMapT &MemMap = MapInfo[I];
-      MemMap.unmap(MemMap.getBase(), MemMap.getCapacity());
+      unmap(MemMap);
     }
   }
 
@@ -605,6 +592,9 @@ template <typename Config> class MapAllocator {
 
   void deallocate(const Options &Options, void *Ptr);
 
+  LargeBlock::Header *tryAllocateFromCache(const Options &Options, uptr Size,
+                                           uptr Alignment, uptr HeadersSize,
+                                           bool &Zeroed);
   static uptr getBlockEnd(void *Ptr) {
     auto *B = LargeBlock::getHeader<Config>(Ptr);
     return B->CommitBase + B->CommitSize;
@@ -665,6 +655,40 @@ template <typename Config> class MapAllocator {
   LocalStats Stats GUARDED_BY(Mutex);
 };
 
+template <typename Config>
+LargeBlock::Header *
+MapAllocator<Config>::tryAllocateFromCache(const Options &Options, uptr Size,
+                                           uptr Alignment, uptr HeadersSize,
+                                           bool &Zeroed) {
+  CachedBlock Entry;
+  uptr EntryHeaderPos;
+
+  Entry = Cache.retrieve(Size, Alignment, getHeadersSize(), EntryHeaderPos);
+  if (Entry.isValid()) {
+    LargeBlock::Header *H = reinterpret_cast<LargeBlock::Header *>(
+        LargeBlock::addHeaderTag<Config>(EntryHeaderPos));
+    Zeroed = Entry.Time == 0;
+    if (useMemoryTagging<Config>(Options))
+      Entry.MemMap.setMemoryPermission(Entry.CommitBase, Entry.CommitSize, 0);
+    uptr NewBlockBegin = reinterpret_cast<uptr>(H + 1);
+    if (useMemoryTagging<Config>(Options)) {
+      if (Zeroed) {
+        storeTags(LargeBlock::addHeaderTag<Config>(Entry.CommitBase),
+                  NewBlockBegin);
+      } else if (Entry.BlockBegin < NewBlockBegin) {
+        storeTags(Entry.BlockBegin, NewBlockBegin);
+      } else {
+        storeTags(untagPointer(NewBlockBegin), untagPointer(Entry.BlockBegin));
+      }
+    }
+
+    H->CommitBase = Entry.CommitBase;
+    H->CommitSize = Entry.CommitSize;
+    H->MemMap = Entry.MemMap;
+    return H;
+  }
+  return nullptr;
+}
 // As with the Primary, the size passed to this function includes any desired
 // alignment, so that the frontend can align the user allocation. The hint
 // parameter allows us to unmap spurious memory when dealing with larger
@@ -692,8 +716,10 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
   if (Alignment < PageSize && Cache.canCache(MinNeededSizeForCache)) {
     LargeBlock::Header *H;
     bool Zeroed;
-    if (Cache.retrieve(Options, Size, Alignment, getHeadersSize(), &H,
-                       &Zeroed)) {
+
+    H = tryAllocateFromCache(Options, Size, Alignment, getHeadersSize(),
+                             Zeroed);
+    if (H != nullptr) {
       const uptr BlockEnd = H->CommitBase + H->CommitSize;
       if (BlockEndPtr)
         *BlockEndPtr = BlockEnd;
@@ -740,9 +766,9 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
   // In the unlikely event of alignments larger than a page, adjust the amount
   // of memory we want to commit, and trim the extra memory.
   if (UNLIKELY(Alignment >= PageSize)) {
-    // For alignments greater than or equal to a page, the user pointer (eg: the
-    // pointer that is returned by the C or C++ allocation APIs) ends up on a
-    // page boundary , and our headers will live in the preceding page.
+    // For alignments greater than or equal to a page, the user pointer (eg:
+    // the pointer that is returned by the C or C++ allocation APIs) ends up
+    // on a page boundary , and our headers will live in the preceding page.
     CommitBase = roundUp(MapBase + PageSize + 1, Alignment) - PageSize;
     const uptr NewMapBase = CommitBase - PageSize;
     DCHECK_GE(NewMapBase, MapBase);
@@ -765,7 +791,7 @@ void *MapAllocator<Config>::allocate(const Options &Options, uptr Size,
   const uptr AllocPos = roundDown(CommitBase + CommitSize - Size, Alignment);
   if (!mapSecondary<Config>(Options, CommitBase, CommitSize, AllocPos, 0,
                             MemMap)) {
-    MemMap.unmap(MemMap.getBase(), MemMap.getCapacity());
+    unmap(MemMap);
     return nullptr;
   }
   const uptr HeaderPos = AllocPos - getHeadersSize();
@@ -807,7 +833,8 @@ void MapAllocator<Config>::deallocate(const Options &Options, void *Ptr)
     Stats.sub(StatAllocated, CommitSize);
     Stats.sub(StatMapped, H->MemMap.getCapacity());
   }
-  Cache.store(Options, H);
+  Cache.store(Options, H->CommitBase, H->CommitSize,
+              reinterpret_cast<uptr>(H + 1), H->MemMap);
 }
 
 template <typename Config>

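For completeness, here is a minimal standalone sketch of the allocator-side half of the change (Header, rebuildHeader, and the field values are simplified stand-ins, not the real scudo definitions). After retrieve() returns a CachedBlock, it is the allocator, as in tryAllocateFromCache() above, that turns the entry back into a header at EntryHeaderPos; the memory-tagging and Zeroed handling from the real code are omitted:

#include <cstdint>
#include <cstdio>

// Hypothetical, simplified stand-ins for illustration only.
using uptr = std::uintptr_t;
struct MemMapT { uptr Base = 0, Capacity = 0; };
struct Header { uptr CommitBase = 0, CommitSize = 0; MemMapT MemMap{}; };
struct CachedBlock {
  uptr CommitBase = 0, CommitSize = 0, BlockBegin = 0;
  MemMapT MemMap{};
  bool isValid() const { return CommitBase != 0; }
};

// Mirrors the shape of tryAllocateFromCache(): the allocator, not the cache,
// rebuilds the header from the value-type entry at EntryHeaderPos.
Header *rebuildHeader(const CachedBlock &Entry, uptr EntryHeaderPos) {
  if (!Entry.isValid())
    return nullptr; // cache miss: caller falls back to a fresh mapping
  Header *H = reinterpret_cast<Header *>(EntryHeaderPos);
  H->CommitBase = Entry.CommitBase;
  H->CommitSize = Entry.CommitSize;
  H->MemMap = Entry.MemMap;
  return H;
}

int main() {
  static Header Backing; // stands in for the mapped memory at EntryHeaderPos
  CachedBlock Entry{0x1000, 0x4000, 0x1040, MemMapT{0x1000, 0x5000}};
  Header *H = rebuildHeader(Entry, reinterpret_cast<uptr>(&Backing));
  std::printf("CommitBase=0x%zx CommitSize=0x%zx\n",
              static_cast<std::size_t>(H->CommitBase),
              static_cast<std::size_t>(H->CommitSize));
  return 0;
}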
JoshuaMBa force-pushed the refactor_store_and_retrieve branch 2 times, most recently from 7a16a42 to 5e9552f on August 5, 2024 at 18:09
JoshuaMBa force-pushed the refactor_store_and_retrieve branch from 5e9552f to 1d429ae on August 5, 2024 at 19:11
JoshuaMBa force-pushed the refactor_store_and_retrieve branch from 1d429ae to d653351 on August 5, 2024 at 20:19
JoshuaMBa force-pushed the refactor_store_and_retrieve branch 3 times, most recently from 246b72d to 450d1e0 on August 5, 2024 at 21:49
JoshuaMBa force-pushed the refactor_store_and_retrieve branch from 450d1e0 to 72d4a13 on August 5, 2024 at 22:59
Contributor

ChiaHungDuan left a comment


I will merge it after @cferris1000's review

JoshuaMBa force-pushed the refactor_store_and_retrieve branch from 72d4a13 to 68f469b on August 6, 2024 at 00:43
store() and retrieve() have been refactored so that the scudo headers
are abstracted away from low-level cache operations.
JoshuaMBa force-pushed the refactor_store_and_retrieve branch from 68f469b to bc4014b on August 6, 2024 at 01:32
Contributor

cferris1000 left a comment


LGTM

ChiaHungDuan merged commit 2336ef9 into llvm:main on Aug 6, 2024
6 checks passed
JoshuaMBa deleted the refactor_store_and_retrieve branch on August 6, 2024 at 21:41