https://github.com/jpienaar created 
https://github.com/llvm/llvm-project/pull/139252

Here we were initializing and locking a shared_mutex in a worker thread, but releasing 
it in the parent, which may (and in practice often does) turn out to be a different thread 
(shared_mutex::unlock_shared is undefined behavior if called from a thread that 
doesn't hold the lock).

I'm not quite sure what the expectation is here, as the variable is never used, 
so instead I've just reset it in the same thread that set it, ensuring it is 
freed in the thread that holds the lock.

From c5ffbd84f8b68bae2112e8cec68803cefe571a72 Mon Sep 17 00:00:00 2001
From: Jacques Pienaar <jpien...@google.com>
Date: Fri, 9 May 2025 05:23:00 -0700
Subject: [PATCH] [lldb][plugin] Clear in same thread as set

Here we were initializing and locking a mutex in a worker thread, but releasing it in 
the parent, which may (and in practice often does) turn out to be a different thread 
(shared_mutex::unlock_shared is undefined behavior if called from a thread that 
doesn't hold the lock).

I'm not quite sure what the expectation is here, as the variable is never used, 
so instead I've just reset it in the same thread that set it, ensuring it is 
freed in the thread that holds the lock.
---
 lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp | 1 +
 1 file changed, 1 insertion(+)

diff --git a/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp b/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
index 523820874752a..0f0226ea9650c 100644
--- a/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
+++ b/lldb/source/Plugins/SymbolFile/DWARF/ManualDWARFIndex.cpp
@@ -121,6 +121,7 @@ void ManualDWARFIndex::Index() {
       units_to_index.size());
   for_each_unit([&clear_cu_dies](size_t, size_t idx, DWARFUnit *unit) {
     clear_cu_dies[idx] = unit->ExtractDIEsScoped();
+    clear_cu_dies[idx].reset();
   });
 
   // Now index all DWARF units in parallel.

_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
