[lldb][plugin] Use counter directly for number of readers #139252
Conversation
Here we were initializing & locking a shared_mutex in a thread, while releasing it in the parent, which may (and often did) turn out to be a different thread (shared_mutex::unlock_shared is undefined behavior if called from a thread that doesn't hold the lock). I'm not quite sure what the expectation is here, as the variable is never used, so instead I've reset it in the same thread in which it was set, to ensure it is freed in the thread holding the lock.
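For illustration, a minimal standalone sketch of the problematic pattern being described (hypothetical code, not the actual LLDB source): the shared_mutex is read-locked on a worker thread, but unlock_shared() runs on the parent thread.

```cpp
#include <shared_mutex>
#include <thread>

int main() {
  std::shared_mutex m;

  std::thread worker([&] {
    m.lock_shared(); // lock acquired on the worker thread...
  });
  worker.join();

  m.unlock_shared(); // ...but released on the parent thread: undefined behavior
  return 0;
}
```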
@llvm/pr-subscribers-lldb Author: Jacques Pienaar (jpienaar)

Changes: Here we were initializing & locking a shared_mutex in a thread, while releasing it in the parent, which may (and often did) turn out to be a different thread (shared_mutex::unlock_shared is undefined behavior if called from a thread that doesn't hold the lock). I'm not quite sure what the expectation is here, as the variable is never used, so instead I've reset it in the same thread in which it was set, to ensure it is freed in the thread holding the lock.

Full diff: https://github.com/llvm/llvm-project/pull/139252.diff

1 Files Affected:
diff --git a/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp b/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
index 523820874752a..0f0226ea9650c 100644
--- a/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
+++ b/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
@@ -121,6 +121,7 @@ void ManualDWARFIndex::Index() {
units_to_index.size());
for_each_unit([&clear_cu_dies](size_t, size_t idx, DWARFUnit *unit) {
clear_cu_dies[idx] = unit->ExtractDIEsScoped();
+ clear_cu_dies[idx].reset();
});
// Now index all DWARF unit in parallel.
This defeats the purpose of storing the sentinel object -- the goal was to clear it only after performing all the indexing.
I think the ScopedExtractDIEs object needs to be implemented differently. It uses the RWMutex as a counter of active object instances, which is... cute... but not really necessary. Instead of the RW mutex, it could count the number of instances directly (in an integer variable) and then clean up when the count goes to zero. The variable could be protected by a (regular) mutex, and this would only need to be locked while manipulating the variable, so it will always be unlocked/locked on the same thread.
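A minimal sketch of that suggestion (the DIEState struct and its member names are assumptions for illustration, not the actual ScopedExtractDIEs implementation): count active instances in an integer guarded by a plain std::mutex, and let whichever thread drops the count to zero run the cleanup. The mutex is held only while the counter is manipulated, so every lock() is paired with an unlock() on the same thread.

```cpp
#include <mutex>

// Hypothetical shared state; the real DWARFUnit members differ.
struct DIEState {
  std::mutex mutex;        // guards the two fields below
  int active_readers = 0;  // number of live ScopedExtractDIEs instances
  bool clear_on_exit = false;
  void ClearDIEs() { /* free the extracted DIEs */ }
};

class ScopedExtractDIEs {
  DIEState &m_state;

public:
  explicit ScopedExtractDIEs(DIEState &state) : m_state(state) {
    std::lock_guard<std::mutex> guard(m_state.mutex);
    ++m_state.active_readers;
  }
  ~ScopedExtractDIEs() {
    std::lock_guard<std::mutex> guard(m_state.mutex);
    // Whichever thread destroys the last instance performs the cleanup;
    // the mutex is locked and unlocked on that same thread, so no
    // cross-thread unlock can occur.
    if (--m_state.active_readers == 0 && m_state.clear_on_exit)
      m_state.ClearDIEs();
  }
};
```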
I agree a condition variable would work here. I realized this later too (I wanted everything destroyed at the end); one could do that as follows too.

But if a plain counter is preferred, I will switch to that.
That is the first idea that crossed my mind; I'm not saying it's the best one. I'm not really sure how a condition variable would help here (you still need some kind of a counter to trigger the condition), but I'm open to other approaches.
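For completeness, a sketch of the condition-variable variant mentioned above (again with hypothetical names such as DIEState, OnReaderDone, and WaitForAllReaders): destructors decrement the counter and notify, and a thread that wants everything destroyed at the end blocks until the counter reaches zero. As noted, the counter is still needed; the condition variable only adds the ability to wait for it to drain.

```cpp
#include <condition_variable>
#include <mutex>

struct DIEState {
  std::mutex mutex;
  std::condition_variable cv;
  int active_readers = 0;
};

// Called from each reader's destructor.
void OnReaderDone(DIEState &state) {
  {
    std::lock_guard<std::mutex> guard(state.mutex);
    --state.active_readers;
  }
  state.cv.notify_all(); // wake any thread waiting for the count to drain
}

// Called by the thread that wants to clean up after all readers are gone.
void WaitForAllReaders(DIEState &state) {
  std::unique_lock<std::mutex> lock(state.mutex);
  state.cv.wait(lock, [&] { return state.active_readers == 0; });
  // Safe to clear the extracted DIEs here: no readers remain.
}
```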
Thanks.

This was flaky before the change (failing roughly 1 in 15 runs in one test), and the timeout seems to suffice to avoid returning None here.
Here we were initializing & locking a shared_mutex in a thread, while releasing it in the parent, which may (and often did) turn out to be a different thread (shared_mutex::unlock_shared is undefined behavior if called from a thread that doesn't hold the lock).

Switch to a counter that directly tracks the number of readers, guarded by a plain mutex that is locked and unlocked around each update, rather than using the reader mutex to detect the last reader (which required the matching thread init/destroy behavior).