
[lldb][plugin] Use counter directly for number of readers #139252


Merged
merged 6 commits into main from users/jpienaar/unlocked_shared_lock_other_thread on
May 13, 2025

Conversation

jpienaar
Member

@jpienaar jpienaar commented May 9, 2025

Here we were initializing & locking a shared_mutex in one thread, while releasing it in the parent, which may (and often did) turn out to be a different thread (shared_mutex::unlock_shared is undefined behavior if called from a thread that doesn't hold the lock).

Switch to a plain counter to keep track of the number of readers, and lock/unlock a regular mutex only while manipulating it, rather than using the reader mutex to detect the last reader (which is what required the matching thread init/destroy behavior).
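For context, a minimal standalone sketch of the hazardous pattern described above; the names are hypothetical and this is a reduced illustration, not the lldb code:

#include <shared_mutex>
#include <thread>

std::shared_mutex m; // stands in for the reader mutex described above

void hazard() {
  std::thread worker([] {
    m.lock_shared(); // shared lock acquired on the worker thread
  });
  worker.join();
  // Undefined behavior: unlock_shared() must be called by a thread that
  // holds a shared lock, but here it runs on the parent thread.
  m.unlock_shared();
}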

@jpienaar jpienaar requested a review from labath May 9, 2025 12:23
@jpienaar jpienaar requested a review from JDevlieghere as a code owner May 9, 2025 12:23
@llvmbot llvmbot added the lldb label May 9, 2025
@llvmbot
Member

llvmbot commented May 9, 2025

@llvm/pr-subscribers-lldb

Author: Jacques Pienaar (jpienaar)

Changes

Here we were initializing & locking a shared_mutex in one thread, while releasing it in the parent, which may (and often did) turn out to be a different thread (shared_mutex::unlock_shared is undefined behavior if called from a thread that doesn't hold the lock).

I'm not quite sure what the expectation is here, as the variable is never used, so instead I've just reset it in the same thread in which it was set, to ensure it's freed in the thread holding the lock.


Full diff: https://github.com/llvm/llvm-project/pull/139252.diff

1 Files Affected:

  • (modified) lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp (+1)
diff --git a/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp b/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
index 523820874752a..0f0226ea9650c 100644
--- a/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
+++ b/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
@@ -121,6 +121,7 @@ void ManualDWARFIndex::Index() {
       units_to_index.size());
   for_each_unit([&clear_cu_dies](size_t, size_t idx, DWARFUnit *unit) {
     clear_cu_dies[idx] = unit->ExtractDIEsScoped();
+    clear_cu_dies[idx].reset();
   });
 
  // Now index all DWARF units in parallel.

Collaborator

@labath labath left a comment


This defeats the purpose of storing the sentinel object -- the goal was to clear it only after performing all the indexing.

I think the ScopedExtractDIEs object needs to be implemented differently. It uses the RWMutex as a counter of active object instances, which is... cute... but not really necessary. Instead of the RW mutex, it could count the number of instances directly (in an integer variable) and then clean up when the count goes to zero. The variable could be protected by a (regular) mutex, and this would only need to be locked while manipulating the variable, so it will always be unlocked/locked on the same thread.
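A rough sketch of that scheme (member and method names here are hypothetical, not the actual DWARFUnit API): the count of live instances lives in a plain integer guarded by a regular mutex, and the mutex is held only while the counter is touched, so each lock/unlock pair happens on one thread.

#include <cstdint>
#include <mutex>

class Unit {
  std::mutex m_extract_mutex; // guards only the counter below
  uint32_t m_active_extracts = 0;
  bool m_clear_dies_on_last = false;

  void ClearTemporaryDIEs() { /* drop the temporarily extracted DIEs */ }

public:
  class ScopedExtract {
  public:
    explicit ScopedExtract(Unit &unit) : m_unit(unit) {
      std::lock_guard<std::mutex> guard(m_unit.m_extract_mutex);
      ++m_unit.m_active_extracts;
    }
    ~ScopedExtract() {
      // Locked and unlocked on the same thread, regardless of which threads
      // constructed and destroyed this object.
      std::lock_guard<std::mutex> guard(m_unit.m_extract_mutex);
      if (--m_unit.m_active_extracts == 0 && m_unit.m_clear_dies_on_last)
        m_unit.ClearTemporaryDIEs();
    }

  private:
    Unit &m_unit;
  };
};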

@jpienaar
Member Author

jpienaar commented May 12, 2025

I agree a condition variable would work here. I realized this later too (I wanted everything destroyed at the end); one could also do that as follows:

// In ManualDWARFIndex
...
  std::vector<DWARFUnit::ScopedExtractDIEs> clear_cu_dies;
  clear_cu_dies.reserve(units_to_index.size());
  for (auto &unit : units_to_index)
    clear_cu_dies.push_back(*unit);
  for_each_unit([&clear_cu_dies](size_t, size_t idx, DWARFUnit *) {
    clear_cu_dies[idx].Extract();
  });
...

// In DWARFUnit.cpp

void DWARFUnit::ScopedExtractDIEs::Extract() {
  {
    llvm::sys::ScopedReader lock(m_cu->m_die_array_mutex);
    if (!m_cu->m_die_array.empty())
      return; // Already parsed
  }
  llvm::sys::ScopedWriter lock(m_cu->m_die_array_mutex);
  if (!m_cu->m_die_array.empty())
    return; // Already parsed

  // Otherwise m_die_array would be already populated.
  lldbassert(!m_cu->m_cancel_scopes);

  m_cu->ExtractDIEsRWLocked();
  m_clear_dies = true;
}

DWARFUnit::ScopedExtractDIEs DWARFUnit::ExtractDIEsScoped() {
  ScopedExtractDIEs scoped(*this);
  scoped.Extract();
  return scoped;
}

But if a plain counter is preferred, I will switch to that.

@jpienaar jpienaar requested a review from labath May 12, 2025 13:20
@labath
Collaborator

labath commented May 12, 2025

But if a plain counter is preferred, I will switch to that.

That was the first idea that crossed my mind; I'm not saying it's the best one. I'm not really sure how a condition variable would help here (you still need some kind of counter to trigger the condition), but I'm open to other approaches.
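A minimal sketch of the condition-variable variant being discussed (hypothetical names, not code from the tree); note that, as pointed out above, it still revolves around a counter to trigger the condition:

#include <condition_variable>
#include <mutex>

struct ActiveReaders {
  std::mutex mutex;
  std::condition_variable cv;
  unsigned count = 0;

  void Enter() {
    std::lock_guard<std::mutex> guard(mutex);
    ++count;
  }
  void Leave() {
    std::lock_guard<std::mutex> guard(mutex);
    if (--count == 0)
      cv.notify_all();
  }
  // Whichever thread wants to clean up blocks here until all readers are done.
  void WaitForZero() {
    std::unique_lock<std::mutex> lock(mutex);
    cv.wait(lock, [this] { return count == 0; });
  }
};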

Collaborator

@labath labath left a comment


Thanks.

Was flaky before this (1 in 15 in one test), and the timeout seems to
suffice in not returning None here.
@jpienaar jpienaar changed the title [lldb][plugin] Clear in same thread as set [lldb][plugin] Use counter directly for number of readers May 13, 2025
@jpienaar jpienaar merged commit c78e65c into main May 13, 2025
8 of 9 checks passed
@jpienaar jpienaar deleted the users/jpienaar/unlocked_shared_lock_other_thread branch May 13, 2025 08:52