jasonmolenda created this revision.
jasonmolenda added reviewers: jingham, JDevlieghere.
jasonmolenda added a project: LLDB.
Herald added a project: All.
jasonmolenda requested review of this revision.
Herald added a subscriber: lldb-commits.
This patch addresses an issue hit by one of our heavy SB API scripting developers. They have code iterating over a large number of objects via SBValue, and one element of each object has type void*; the Itanium ABI plugin sniffs the memory there to see if it might have a dynamic type. In this case the elements often have the value 0, so lldb tries to read memory at address 0, which fails, and the number of such reads is quite large. Each failing memory read takes long enough in a JTAG-like environment that this is a significant performance issue.

We have a MemoryCache subsystem that saves blocks of inferior memory when we do a read, so repeated reads of the same address, or reads near an address that was already cached, are served from a single point in time. These memory cache buffers are flushed when execution resumes. This patch adds a new list of addresses we tried to read from at this execution stop which returned an error; we will not try to read from those addresses again. If lldb allocates memory in the inferior, or if we resume execution, this list of addresses is flushed.

I settled on an llvm::SmallSet container for these addr_t's, but I'd appreciate suggestions if people have a more appropriate container in mind. I expect this list to remain relatively small -- it is not common that lldb tries to read from unreadable addresses -- and my primary optimization concern is quick lookup, because I will consult this list before reading from any address in memory.

When I was outlining my idea for this, Jim pointed out that MemoryCache already has an `InvalidRanges m_invalid_ranges` ivar, and asked whether I could reuse it. That is called by the DynamicLoader to mark a region of memory as never accessible (e.g. the lower 4GB of a 64-bit Darwin process), and it is not flushed on execution resume / memory allocation. It is also expressed in terms of memory ranges, whereas I don't know how far an inaccessible region extends beyond a failing address. I could assume the range extends to the end of a page boundary, if I knew the page size, but that still doesn't address the fact that `m_invalid_ranges` is intended to track inaccessible memory that is invariant over the process lifetime.

Repository:
  rG LLVM Github Monorepo

https://reviews.llvm.org/D134906

Files:
  lldb/include/lldb/Target/Memory.h
  lldb/source/Target/Memory.cpp
  lldb/source/Target/Process.cpp
  lldb/test/API/functionalities/gdb_remote_client/TestMemoryReadFailCache.py
Index: lldb/test/API/functionalities/gdb_remote_client/TestMemoryReadFailCache.py
===================================================================
--- /dev/null
+++ lldb/test/API/functionalities/gdb_remote_client/TestMemoryReadFailCache.py
@@ -0,0 +1,55 @@
+import lldb
+from lldbsuite.test.lldbtest import *
+from lldbsuite.test.decorators import *
+from lldbsuite.test.gdbclientutils import *
+from lldbsuite.test.lldbgdbclient import GDBRemoteTestBase
+
+
+class TestMemoryReadFailCache(GDBRemoteTestBase):
+
+    @skipIfXmlSupportMissing
+    def test(self):
+        class MyResponder(MockGDBServerResponder):
+            first_read = True
+
+            def qHostInfo(self):
+                return "cputype:16777228;cpusubtype:2;addressing_bits:47;ostype:macosx;vendor:apple;os_version:12.6.0;endian:little;ptrsize:8;"
+
+            def readMemory(self, addr, length):
+                if self.first_read:
+                    self.first_read = False
+                    return "E05"
+                return "AA" * length
+
+        self.server.responder = MyResponder()
+        target = self.dbg.CreateTarget('')
+        if self.TraceOn():
+            self.runCmd("log enable gdb-remote packets")
+            self.addTearDownHook(
+                lambda: self.runCmd("log disable gdb-remote packets"))
+        process = self.connect(target)
+
+        err = lldb.SBError()
+        uint = process.ReadUnsignedFromMemory(0x1000, 4, err)
+        self.assertTrue(err.Fail(),
+                        "lldb should fail to read memory at 0x1000 the first "
+                        "time, after checking with the stub")
+
+        # Now the remote stub will return a non-zero number if
+        # we ask again, unlike a real stub.
+        # Confirm that reading from that address still fails -
+        # that we cached the unreadable address.
+        uint = process.ReadUnsignedFromMemory(0x1000, 4, err)
+        self.assertTrue(err.Fail(),
+                        "lldb should fail to read memory at 0x1000 the second "
+                        "time, because we did not read from the remote stub.")
+
+        # Allocate memory in the inferior, which will flush
+        # the unreadable address cache.
+        process.AllocateMemory(0x100, 3, err)
+
+        uint = process.ReadUnsignedFromMemory(0x1000, 4, err)
+        self.assertTrue(err.Success(),
+                        "lldb should succeed reading memory at 0x1000 the "
+                        "third time, because we flushed the failed address "
+                        "cache, and the stub is now providing data there.")
Index: lldb/source/Target/Process.cpp
===================================================================
--- lldb/source/Target/Process.cpp
+++ lldb/source/Target/Process.cpp
@@ -2266,6 +2266,11 @@
     error.SetErrorToGenericError();
     return LLDB_INVALID_ADDRESS;
   }
+  if (!GetDisableMemoryCache()) {
+    // Flush the unreadable addresses list; allocating a
+    // block of memory will change which addresses can be accessed.
+    m_memory_cache.FlushUnreadableAddresses();
+  }
 #if defined(USE_ALLOCATE_MEMORY_CACHE)
   return m_allocated_memory_cache.AllocateMemory(size, permissions, error);
Index: lldb/source/Target/Memory.cpp
===================================================================
--- lldb/source/Target/Memory.cpp
+++ lldb/source/Target/Memory.cpp
@@ -24,7 +24,8 @@
 MemoryCache::MemoryCache(Process &process)
     : m_mutex(), m_L1_cache(), m_L2_cache(), m_invalid_ranges(),
       m_process(process),
-      m_L2_cache_line_byte_size(process.GetMemoryCacheLineSize()) {}
+      m_L2_cache_line_byte_size(process.GetMemoryCacheLineSize()),
+      m_unreadable_addresses() {}
 
 // Destructor
 MemoryCache::~MemoryCache() = default;
@@ -36,6 +37,7 @@
   if (clear_invalid_ranges)
     m_invalid_ranges.Clear();
   m_L2_cache_line_byte_size = m_process.GetMemoryCacheLineSize();
+  FlushUnreadableAddresses();
 }
 
 void MemoryCache::AddL1CacheData(lldb::addr_t addr, const void *src,
@@ -51,6 +53,8 @@
 }
 
 void MemoryCache::Flush(addr_t addr, size_t size) {
+  FlushUnreadableAddresses();
+
   if (size == 0)
     return;
@@ -149,6 +153,11 @@
     }
   }
 
+  if (IsAddressUnreadable(addr)) {
+    error.SetErrorStringWithFormat("memory read failed for 0x%" PRIx64, addr);
+    return 0;
+  }
+
   // If this memory read request is larger than the cache line size, then we
   // (1) try to read as much of it at once as possible, and (2) don't add the
   // data to the memory cache. We don't want to split a big read up into more
@@ -163,6 +172,8 @@
   // anything
   if (bytes_read > 0)
     AddL1CacheData(addr, dst, bytes_read);
+  else
+    AddUnreadableAddress(addr);
 
   return bytes_read;
 }
@@ -224,14 +235,21 @@
       // We need to read from the process
       if (bytes_left > 0) {
+        if (IsAddressUnreadable(curr_addr)) {
+          error.SetErrorStringWithFormat("memory read failed for 0x%" PRIx64,
+                                         addr);
+          return dst_len - bytes_left;
+        }
         assert((curr_addr % cache_line_byte_size) == 0);
         std::unique_ptr<DataBufferHeap> data_buffer_heap_up(
             new DataBufferHeap(cache_line_byte_size, 0));
         size_t process_bytes_read = m_process.ReadMemoryFromInferior(
             curr_addr, data_buffer_heap_up->GetBytes(),
             data_buffer_heap_up->GetByteSize(), error);
-        if (process_bytes_read == 0)
+        if (process_bytes_read == 0) {
+          AddUnreadableAddress(curr_addr);
           return dst_len - bytes_left;
+        }
 
         if (process_bytes_read != cache_line_byte_size) {
           if (process_bytes_read < data_buffer_heap_up->GetByteSize()) {
@@ -250,6 +268,16 @@
   return dst_len - bytes_left;
 }
 
+void MemoryCache::AddUnreadableAddress(addr_t address) {
+  m_unreadable_addresses.insert(address);
+}
+
+bool MemoryCache::IsAddressUnreadable(addr_t address) {
+  return m_unreadable_addresses.count(address) > 0;
+}
+
+void MemoryCache::FlushUnreadableAddresses() { m_unreadable_addresses.clear(); }
+
 AllocatedBlock::AllocatedBlock(lldb::addr_t addr, uint32_t byte_size,
                                uint32_t permissions, uint32_t chunk_size)
     : m_range(addr, byte_size), m_permissions(permissions),
Index: lldb/include/lldb/Target/Memory.h
===================================================================
--- lldb/include/lldb/Target/Memory.h
+++ lldb/include/lldb/Target/Memory.h
@@ -15,6 +15,8 @@
 #include <mutex>
 #include <vector>
 
+#include "llvm/ADT/SmallSet.h"
+
 namespace lldb_private {
 // A class to track memory that was read from a live process between
 // runs.
@@ -43,6 +45,15 @@
   void AddL1CacheData(lldb::addr_t addr,
                       const lldb::DataBufferSP &data_buffer_sp);
 
+  /// Remember that a memory address cannot be read from.
+  void AddUnreadableAddress(lldb::addr_t address);
+
+  /// Test if a memory address was already recorded as unreadable.
+  bool IsAddressUnreadable(lldb::addr_t address);
+
+  /// Clear the unreadable address cache.
+  void FlushUnreadableAddresses();
+
 protected:
   typedef std::map<lldb::addr_t, lldb::DataBufferSP> BlockMap;
   typedef RangeVector<lldb::addr_t, lldb::addr_t, 4> InvalidRanges;
@@ -57,6 +68,7 @@
   InvalidRanges m_invalid_ranges;
   Process &m_process;
   uint32_t m_L2_cache_line_byte_size;
+  llvm::SmallSet<lldb::addr_t, 8> m_unreadable_addresses;
 
 private:
   MemoryCache(const MemoryCache &) = delete;
_______________________________________________
lldb-commits mailing list
lldb-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits