
Reduce llvm-gsymutil memory usage (lambda-free, and less locking) #97640


Open · wants to merge 1 commit into main from gsym-locking

Conversation

kevinfrei
Contributor

@kevinfrei kevinfrei commented Jul 3, 2024

The previous PR exposed a compiler issue with the PPC64LE build due to lambdas (which I had been warned about!), so it was reverted.

I spent an hour thinking about what @dwblaikie had mentioned (using a single recursive lock) and realized that his suggestion works fine with an almost immeasurable performance hit: over 20 runs, the average speed was about 0.2% slower with the single recursive lock, which looks like noise rather than a real regression.

The end result is a much smaller diff (literally just adding the lock_guards in the two places they're needed) plus the changes to gsymutil.

From the original PR:
llvm-gsymutil eats a lot of RAM. On some large binaries, it causes OOMs on smaller hardware, consuming well over 64GB of RAM. This change frees line tables once we're done with them, and frees each DWARFUnit's DIEs when we finish processing that DU, though they may get reconstituted if there are references from other DUs during processing. Once the conversion is complete, all DIEs are freed. The reduction in peak memory usage from these changes was between 7% and 12% in my tests.

The mutex is there to prevent freeing of the DIE arrays while they're being extracted. It needs to be a recursive mutex because there's a possible (and sometimes taken) recursive path through the final section of the code (determineStringOffsetsTableContribution) that may call back into the extraction function.
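
In sketch form, the deadlock this avoids looks roughly like this (a self-contained illustration with hypothetical names and a stand-in DIE array, not the actual LLVM code): extraction can reach itself again on the same thread, so a plain std::mutex would deadlock, while a std::recursive_mutex tolerates the re-lock.

#include <mutex>
#include <vector>

struct UnitSketch {
  std::recursive_mutex FreeDIEsMutex;
  std::vector<int> DieArray; // stand-in for the parsed DIEs

  void determineStringOffsetsSketch() {
    // May need the DIEs again, re-entering extraction on the same thread.
    extractDIEsIfNeededSketch();
  }

  void extractDIEsIfNeededSketch() {
    std::lock_guard<std::recursive_mutex> Lock(FreeDIEsMutex); // re-lock by the same thread is OK
    if (!DieArray.empty())
      return;
    DieArray.assign({1, 2, 3});     // pretend-parse
    determineStringOffsetsSketch(); // the recursive path described above
  }

  void clearDIEsSketch() {
    std::lock_guard<std::recursive_mutex> Lock(FreeDIEsMutex);
    std::vector<int>().swap(DieArray); // actually release the memory
  }
};

int main() {
  UnitSketch CU;
  CU.extractDIEsIfNeededSketch();
  CU.clearDIEsSketch();
}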


github-actions bot commented Jul 3, 2024

⚠️ We detected that you are using a GitHub private e-mail address to contribute to the repo.
Please turn off the "Keep my email addresses private" setting in your account.
See LLVM Discourse for more information.

@kevinfrei
Contributor Author

⚠️ We detected that you are using a GitHub private e-mail address to contribute to the repo. Please turn off the "Keep my email addresses private" setting in your account. See LLVM Discourse for more information.

I wonder how that happened. Fixed...

@kevinfrei kevinfrei marked this pull request as ready for review July 3, 2024 21:28
@llvmbot
Member

llvmbot commented Jul 3, 2024

@llvm/pr-subscribers-mc

@llvm/pr-subscribers-debuginfo

Author: Kevin Frei (kevinfrei)

Changes

The previous PR exposed a compiler issue with the PPC64LE build due to lambdas (which I had been warned about!), so it was reverted.

I spent an hour thinking about what @dwblaikie had mentioned (using a single recursive lock) and realized that his suggestion works fine with an almost immeasurable performance hit: over 20 runs, the average speed was about 0.2% slower with the single recursive lock, which looks like noise rather than a real regression.

The end result is a much smaller diff (literally just adding the lock_guards in the two places they're needed) plus the changes to gsymutil.

From the original PR:
llvm-gsymutil eats a lot of RAM. On some large binaries, it causes OOMs on smaller hardware, consuming well over 64GB of RAM. This change frees line tables once we're done with them, and frees each DWARFUnit's DIEs when we finish processing that DU, though they may get reconstituted if there are references from other DUs during processing. Once the conversion is complete, all DIEs are freed. The reduction in peak memory usage from these changes was between 7% and 12% in my tests.

The mutex is there to prevent freeing of the DIE arrays while they're being extracted. It needs to be a recursive mutex because there's a possible (and sometimes taken) recursive path through the final section of the code (determineStringOffsetsTableContribution) that may call back into the extraction function.


Full diff: https://github.com/llvm/llvm-project/pull/97640.diff

3 Files Affected:

  • (modified) llvm/include/llvm/DebugInfo/DWARF/DWARFUnit.h (+6-3)
  • (modified) llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp (+10)
  • (modified) llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp (+14-1)
diff --git a/llvm/include/llvm/DebugInfo/DWARF/DWARFUnit.h b/llvm/include/llvm/DebugInfo/DWARF/DWARFUnit.h
index 80c27aea89312..be3c4fe7c4b19 100644
--- a/llvm/include/llvm/DebugInfo/DWARF/DWARFUnit.h
+++ b/llvm/include/llvm/DebugInfo/DWARF/DWARFUnit.h
@@ -27,6 +27,7 @@
 #include <cstdint>
 #include <map>
 #include <memory>
+#include <mutex>
 #include <set>
 #include <utility>
 #include <vector>
@@ -257,6 +258,8 @@ class DWARFUnit {
 
   std::shared_ptr<DWARFUnit> DWO;
 
+  mutable std::recursive_mutex FreeDIEsMutex;
+
 protected:
   friend dwarf_linker::parallel::CompileUnit;
 
@@ -566,6 +569,9 @@ class DWARFUnit {
 
   Error tryExtractDIEsIfNeeded(bool CUDieOnly);
 
+  /// clearDIEs - Clear parsed DIEs to keep memory usage low.
+  void clearDIEs(bool KeepCUDie);
+
 private:
   /// Size in bytes of the .debug_info data associated with this compile unit.
   size_t getDebugInfoSize() const {
@@ -581,9 +587,6 @@ class DWARFUnit {
   void extractDIEsToVector(bool AppendCUDie, bool AppendNonCUDIEs,
                            std::vector<DWARFDebugInfoEntry> &DIEs) const;
 
-  /// clearDIEs - Clear parsed DIEs to keep memory usage low.
-  void clearDIEs(bool KeepCUDie);
-
   /// parseDWO - Parses .dwo file for current compile unit. Returns true if
   /// it was actually constructed.
   /// The \p AlternativeLocation specifies an alternative location to get
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp b/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp
index bdd04b00f557b..4f329f05a48cf 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp
@@ -496,6 +496,12 @@ void DWARFUnit::extractDIEsIfNeeded(bool CUDieOnly) {
 }
 
 Error DWARFUnit::tryExtractDIEsIfNeeded(bool CUDieOnly) {
+  // Acquire the FreeDIEsMutex recursive lock to prevent a different thread
+  // from freeing the DIE arrays while they're being extracted. It needs to
+  // be recursive, as there is a potentially recursive path through
+  // determineStringOffsetsTableContribution.
+  std::lock_guard<std::recursive_mutex> FreeLock(FreeDIEsMutex);
+
   if ((CUDieOnly && !DieArray.empty()) ||
       DieArray.size() > 1)
     return Error::success(); // Already parsed.
@@ -653,6 +659,10 @@ bool DWARFUnit::parseDWO(StringRef DWOAlternativeLocation) {
 }
 
 void DWARFUnit::clearDIEs(bool KeepCUDie) {
+  // We need to acquire the FreeDIEsMutex lock in write-mode, because we are
+  // going to free the DIEs, when other threads might be trying to create them.
+  llvm::sys::ScopedWriter FreeLock(FreeDIEsMutex);
+
   // Do not use resize() + shrink_to_fit() to free memory occupied by dies.
   // shrink_to_fit() is a *non-binding* request to reduce capacity() to size().
   // It depends on the implementation whether the request is fulfilled.
diff --git a/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp b/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
index 601686fdd3dd5..e1b30648b6a77 100644
--- a/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
+++ b/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
@@ -587,6 +587,11 @@ Error DwarfTransformer::convert(uint32_t NumThreads, OutputAggregator &Out) {
       DWARFDie Die = getDie(*CU);
       CUInfo CUI(DICtx, dyn_cast<DWARFCompileUnit>(CU.get()));
       handleDie(Out, CUI, Die);
+      // Release the line table, once we're done.
+      DICtx.clearLineTableForUnit(CU.get());
+      // Free any DIEs that were allocated by the DWARF parser.
+      // If/when they're needed by other CU's, they'll be recreated.
+      CU->clearDIEs(/*KeepCUDie=*/false);
     }
   } else {
     // LLVM Dwarf parser is not thread-safe and we need to parse all DWARF up
@@ -612,11 +617,16 @@ Error DwarfTransformer::convert(uint32_t NumThreads, OutputAggregator &Out) {
       DWARFDie Die = getDie(*CU);
       if (Die) {
         CUInfo CUI(DICtx, dyn_cast<DWARFCompileUnit>(CU.get()));
-        pool.async([this, CUI, &LogMutex, &Out, Die]() mutable {
+        pool.async([this, CUI, &CU, &LogMutex, &Out, Die]() mutable {
           std::string storage;
           raw_string_ostream StrStream(storage);
           OutputAggregator ThreadOut(Out.GetOS() ? &StrStream : nullptr);
           handleDie(ThreadOut, CUI, Die);
+          // Release the line table once we're done.
+          DICtx.clearLineTableForUnit(CU.get());
+          // Free any DIEs that were allocated by the DWARF parser.
+          // If/when they're needed by other CU's, they'll be recreated.
+          CU->clearDIEs(/*KeepCUDie=*/false);
           // Print ThreadLogStorage lines into an actual stream under a lock
           std::lock_guard<std::mutex> guard(LogMutex);
           if (Out.GetOS()) {
@@ -629,6 +639,9 @@ Error DwarfTransformer::convert(uint32_t NumThreads, OutputAggregator &Out) {
     }
     pool.wait();
   }
+  // Now get rid of all the DIEs that may have been recreated
+  for (const auto &CU : DICtx.compile_units())
+    CU->clearDIEs(/*KeepCUDie=*/false);
   size_t FunctionsAddedCount = Gsym.getNumFunctionInfos() - NumBefore;
   Out << "Loaded " << FunctionsAddedCount << " functions from DWARF.\n";
   return Error::success();

@kevinfrei kevinfrei force-pushed the gsym-locking branch 2 times, most recently from 20eb6e4 to 4f36e98 on July 8, 2024 19:07
Collaborator

@dwblaikie dwblaikie left a comment


Hmm, now this is a bit simpler - how bad would it be if it indirected through the DWARFContext & used the existing context-level locking? (perhaps it's a losing battle to try to avoid the threading complexity leaking out of the ThreadSafeDWARFContext, but I'll at least ask)

Like if DWARFContext had a somewhat awkward "doThisThingThreadSafely(function_ref<void()>)" and these APIs called that, which held the lock and did the work (or with the non-thread-safe DWARFContext, just called back immediately)
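
Roughly this shape, as a self-contained sketch (hypothetical names — DWARFContext has no such member today, and std::function stands in for llvm::function_ref so the sketch compiles on its own):

#include <functional>
#include <mutex>

// Hypothetical context-level hook: a thread-safe context holds its lock around
// the callback; a non-thread-safe context just calls back immediately.
struct ContextSketch {
  std::recursive_mutex *Mutex = nullptr; // null => non-thread-safe context

  void doThisThingThreadSafely(const std::function<void()> &Work) {
    if (Mutex) {
      std::lock_guard<std::recursive_mutex> Lock(*Mutex);
      Work();
    } else {
      Work();
    }
  }
};

int main() {
  std::recursive_mutex M;
  ContextSketch ThreadSafeCtx{&M};
  ContextSketch PlainCtx; // no locking at all
  ThreadSafeCtx.doThisThingThreadSafely([] { /* e.g. clear or extract DIEs */ });
  PlainCtx.doThisThingThreadSafely([] { /* same work, no locking overhead */ });
}

The units themselves would then never name a mutex; they'd just describe the work to run.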

@@ -653,6 +659,10 @@ bool DWARFUnit::parseDWO(StringRef DWOAlternativeLocation) {
}

void DWARFUnit::clearDIEs(bool KeepCUDie) {
// We need to acquire the FreeDIEsMutex lock in write-mode, because we are
Collaborator


This comment is out of date - the recursive_mutex isn't shared, so there's no "write-mode" at work here, I think?

Contributor Author


Oops: That slipped through. I'll update it.

@kevinfrei
Contributor Author

Hmm, now this is a bit simpler - how bad would it be if it indirected through the DWARFContext & used the existing context-level locking? (perhaps it's a losing battle to try to avoid the threading complexity leaking out of the ThreadSafeDWARFContext, but I'll at least ask)

Like if DWARFContext had a somewhat awkward "doThisThingThreadSafely(function_ref<void()>)" and these APIs called that, which held the lock and did the work (or with the non-thread-safe DWARFContext, just called back immediately)

That makes my inner grumpy-old-man want to start yelling about kids-these-days doing crazy-things.

So the user needs to know if they're operating in a thread-safe environment or not, and has to make sure that anything being called is also thread-safe. That seems super dangerous. That said, I feel like you're concerned about single-threaded performance of this code, but from what I've seen it's overwhelmingly I/O-bound. Given your experience in the codebase, I'm gonna go try to see if I can do what you're describing, but I'd really appreciate more color on why you're so worried about the performance of this single lock. I feel like I'm missing some scenario that should be more obvious to me.

@dwblaikie
Collaborator

Hmm, now this is a bit simpler - how bad would it be if it indirected through the DWARFContext & used the existing context-level locking? (perhaps it's a losing battle to try to avoid the threading complexity leaking out of the ThreadSafeDWARFContext, but I'll at least ask)
Like if DWARFContext had a somewhat awkward "doThisThingThreadSafely(function_ref<void()>)" and these APIs called that, which held the lock and did the work (or with the non-thread-safe DWARFContext, just called back immediately)

That makes my inner grumpy-old-man want to start yelling about kids-these-days doing crazy-things.

Lambda/functor passing idioms aren't especially uncommon in the LLVM codebase...

So the user needs to know if they're operating in a thread-safe environment or not, and has to make sure that anything being called is also thread-safe.

Not sure I'm following/understanding this characterization... Perhaps there's some miscommunication about what I was suggesting.

That said, I feel like you're concerned about single-threaded performance of this code, but from what I've seen it's overwhelmingly I/O-bound.

I don't imagine my proposal will substantially affect single-threaded performance, and I'm not suggesting it out of a motivation to improve said performance.

My path to the suggestion was: "I wonder if we already use recursive_mutex in LLVM, searches, oh, we use it a bunch in libDebugInfoDWARF already, oh, right, we centralized it in the ThreadSafeDWARFContext to avoid complicating other parts of libDebugInfoDWARF with thread safety concerns -> I wonder if this new concern can be factored into that existing design for the same reasons, so that thread safety concerns are relatively isolated/centralized"

@kevinfrei
Contributor Author

we centralized it in the ThreadSafeDWARFContext to avoid complicating other parts of libDebugInfoDWARF with thread safety concerns -> I wonder if this new concern can be factored into that existing design for the same reasons, so that thread safety concerns are relatively isolated/centralized

Got it! I stumbled across this stuff last night. It still triggers my threading anti-pattern eye-twitch, but the system is designed on top of that, so 🤷. I'll refactor/hoist the gsym usage out to use this interface instead.

@dwblaikie
Collaborator

we centralized it in the ThreadSafeDWARFContext to avoid complicating other parts of libDebugInfoDWARF with thread safety concerns -> I wonder if this new concern can be factored into that existing design for the same reasons, so that thread safety concerns are relatively isolated/centralized

Got it! I stumbled across this stuff last night. It still triggers my threading anti-pattern eye-twitch, but the system is designed on top of that, so 🤷. I'll refactor/hoist the gsym usage out to use this interface instead.

Yeah, none of this is gospel, FWIW - this was just our best guess at the time given that in general the libDebugInfoDWARF APIs suffer from a lot of inter-dependent interactions and a fairly wide API and so there's no doubt lots of uses that continue to be thread unsafe, so I partly didn't want us to try to guarantee general thread safety when it'd be so impractical given the design.

At some point it'd probably be worth reconsidering the whole design/throw it out and see what starting again looks like - though, equally, at some point, it'd be nice to merge most of these APIs with the LLDB versions (long ago the LLVM one was created as a fork of the LLDB one, and they've since diverged, contain a lot of duplication, etc). Not sure if it makes sense to do a major/invasive rewrite at the /same/ time as merging, or if they should be kept separate.

I guess LLDB would probably enjoy some of the benefits of parallelism in parsing DWARF (maybe - mostly it does it so lazily that there's little to parallelize at any one point in time - except when the input doesn't have an index, in which case it parses it all/indexes up front, but that's not the main case)

@kevinfrei
Contributor Author

I guess LLDB would probably enjoy some of the benefits of parallelism in parsing DWARF (maybe - mostly it does it so lazily that there's little to parallelize at any one point in time - except when the input doesn't have an index, in which case it parses it all/indexes up front, but that's not the main case)

Honestly, the most pleasant way I've seen deeply interactive systems like LLDB do things in both a parallel & lazy fashion is by using coroutines (async/await). I really wouldn't suggest trying to build something that isn't fully supported by the language & runtime. So, maybe when LLVM/LLDB bumps the language standard to C++20?
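
For illustration only, the lazy half of that idea looks roughly like this (a minimal C++20 generator with made-up names, nothing LLDB- or LLVM-specific): the consumer drives the work, so each "unit" is parsed exactly when it's asked for.

#include <coroutine>
#include <cstdio>
#include <exception>

// Minimal generator type: just enough machinery to lazily yield ints.
struct IntGen {
  struct promise_type {
    int Current = 0;
    IntGen get_return_object() {
      return IntGen(std::coroutine_handle<promise_type>::from_promise(*this));
    }
    std::suspend_always initial_suspend() noexcept { return {}; }
    std::suspend_always final_suspend() noexcept { return {}; }
    std::suspend_always yield_value(int V) noexcept { Current = V; return {}; }
    void return_void() noexcept {}
    void unhandled_exception() { std::terminate(); }
  };

  explicit IntGen(std::coroutine_handle<promise_type> Handle) : H(Handle) {}
  IntGen(IntGen &&Other) noexcept : H(Other.H) { Other.H = {}; }
  IntGen(const IntGen &) = delete;
  ~IntGen() { if (H) H.destroy(); }

  bool next() { H.resume(); return !H.done(); }
  int value() const { return H.promise().Current; }

  std::coroutine_handle<promise_type> H;
};

// Pretend each yielded value is one parsed unit: no parsing happens until the
// consumer asks for the next result.
IntGen parseUnitsLazily(int Count) {
  for (int I = 0; I < Count; ++I)
    co_yield I;
}

int main() {
  IntGen Units = parseUnitsLazily(3);
  while (Units.next())
    std::printf("parsed unit %d\n", Units.value());
}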

anyway...

I spent about a day chasing the constantly growing list of DWARFUnit functions that might call extractDIEsIfNeeded just from inside the gsymutil scenario and hoisting them to the DWARFContext API before declaring it a total loss. I'm not particularly well informed about the general data schema of this stuff, but GSYM transformation is operating at the DWARFUnit level, and trying to make the DIE creation/destruction thread-safe at the DWARFContext level just doesn't 'fit' with the abstraction layers of the system at all :(

I think the single recursive lock is the best you're going to get with the current model.

@kevinfrei kevinfrei force-pushed the gsym-locking branch 2 times, most recently from 21d2c0a to 4496a42 on July 17, 2024 18:47
@dwblaikie
Collaborator

I guess LLDB would probably enjoy some of the benefits of parallelism in parsing DWARF (maybe - mostly it does it so lazily that there's little to parallelize at any one point in time - except when the input doesn't have an index, in which case it parses it all/indexes up front, but that's not the main case)

Honestly, the most pleasant way I've seen deeply interactive systems like LLDB do things in both a parallel & lazy fashion is by using coroutines (async/await). I really wouldn't suggest trying to build something that isn't fully supported by the language & runtime. So, maybe when LLVM/LLDB bumps the language standard to C++20?

Yeah, perhaps.

anyway...

I spent about a day chasing the constantly growing list of DWARFUnit functions that might call extractDIEsIfNeeded just from inside the gsymutil scenario and hoisting them to the DWARFContext API before declaring it a total loss. I'm not particularly well informed about the general data schema of this stuff, but GSYM transformation is operating at the DWARFUnit level, and trying to make the DIE creation/destruction thread-safe at the DWARFContext level just doesn't 'fit' with the abstraction layers of the system at all :(

Ah, I was thinking of something jankier. Just the code/locking you already have, essentially, but calling into DWARFContext with a lambda to do the work under the lock that DWARFContext already has. Yeah, not great, but it does mean not introducing more locks or broadening the scope of "where to look" to understand the locking scheme, to some extent.
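
As a concrete, still hypothetical, self-contained sketch of that call-site shape (reusing the doThisThingThreadSafely hook from above, reduced to the thread-safe flavor): the unit keeps its existing bodies but runs them through the context's hook instead of owning a mutex of its own.

#include <functional>
#include <mutex>
#include <vector>

struct ContextSketch {
  std::recursive_mutex Mutex; // the lock the context already has
  void doThisThingThreadSafely(const std::function<void()> &Work) {
    std::lock_guard<std::recursive_mutex> Lock(Mutex);
    Work();
  }
};

struct UnitSketch {
  ContextSketch &Ctx;
  std::vector<int> DieArray; // stand-in for parsed DIEs

  void extractDIEsIfNeeded() {
    Ctx.doThisThingThreadSafely([&] {
      if (DieArray.empty())
        DieArray.assign({1, 2, 3}); // pretend-parse; re-entry is fine, the lock is recursive
    });
  }

  void clearDIEs() {
    Ctx.doThisThingThreadSafely([&] {
      std::vector<int>().swap(DieArray); // actually release the memory
    });
  }
};

int main() {
  ContextSketch Ctx;
  UnitSketch CU{Ctx};
  CU.extractDIEsIfNeeded();
  CU.clearDIEs();
}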

Does that make sense?

@llvmbot llvmbot added the mc Machine (object) code label Sep 25, 2024

github-actions bot commented Sep 25, 2024

✅ With the latest revision this PR passed the C/C++ code formatter.

@kevinfrei
Contributor Author

Gack: Mixed a couple of diffs together. Ignore that last change. It will be removed...

llvm-gsymutil eats a lot of RAM. On some large binaries, it causes OOMs on smaller hardware, consuming well over 64GB of RAM.
This change frees line tables once we're done with them, and frees each DWARFUnit's DIEs when we finish processing that DU, though
they may get reconstituted if there are references from other DUs during processing. Once the conversion is complete, all DIEs
are freed. The reduction in peak memory usage from these changes was between 7% and 12% in my tests.

A recursive mutex is necessary to prevent accidental freeing of the DIE arrays while they're being extracted. It needs
to be recursive as there's a recursive path through the final section of the code (determineStringOffsetsTableContribution) that
may result in a call back into the extraction function.
Labels: debuginfo, mc (Machine (object) code)