https://github.com/bulbazord created https://github.com/llvm/llvm-project/pull/190833
Reapplication notes: After reviewing the test failures that caused the
original reverts, I'm not convinced that this change is related. None of
the failed tests timed out while waiting for a file.

Original summary: I've been tracking sporadic timeouts waiting for a file
to appear on macOS buildbots (and occasionally in local development
environments). I believe I've tracked it down to a regression in process
launch performance on macOS. I noticed that running multiple test suites
simultaneously almost always triggered these failures, and that the tests
were always waiting on files created by the inferior. Increasing this
timeout no longer triggers the failures on my loaded machine locally.

This timeout moves from about 16 seconds of total wait time to about 127
seconds of total wait time. That may feel a bit extreme, but this is a
performance issue. While I was here, I also cleaned up logging code I had
been using to investigate the test failures.

rdar://172122213

>From 75d71afc84a42860d998349e199d50432da9cdf0 Mon Sep 17 00:00:00 2001
From: Alex Langford <[email protected]>
Date: Tue, 31 Mar 2026 12:36:51 -0700
Subject: [PATCH] Reapply "[lldb] Increase timeout on
 lldbutil.wait_for_file_on_target"

Reapplication notes: After reviewing the test failures that caused the
original reverts, I'm not convinced that this change is inherently
related. None of the failed tests timed out while waiting for a file.

Original summary: I've been tracking sporadic timeouts waiting for a file
to appear on macOS buildbots (and occasionally in local development
environments). I believe I've tracked it down to a regression in process
launch performance on macOS. I noticed that running multiple test suites
simultaneously almost always triggered these failures, and that the tests
were always waiting on files created by the inferior. Increasing this
timeout no longer triggers the failures on my loaded machine locally.

This timeout moves from about 16 seconds of total wait time to about 127
seconds of total wait time. That may feel a bit extreme, but this is a
performance issue. While I was here, I also cleaned up logging code I had
been using to investigate the test failures.

rdar://172122213
---
 .../Python/lldbsuite/test/lldbutil.py | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/lldb/packages/Python/lldbsuite/test/lldbutil.py b/lldb/packages/Python/lldbsuite/test/lldbutil.py
index 7f7cdde561702..4b59dae7c9ab5 100644
--- a/lldb/packages/Python/lldbsuite/test/lldbutil.py
+++ b/lldb/packages/Python/lldbsuite/test/lldbutil.py
@@ -1685,23 +1685,18 @@ def read_file_from_process_wd(test, name):
     return read_file_on_target(test, path)
 
 
-def wait_for_file_on_target(testcase, file_path, max_attempts=6):
-    for i in range(max_attempts):
+def wait_for_file_on_target(testcase, file_path):
+    import time
+
+    MAX_ATTEMPTS = 60
+    timeout_seconds = 20 if "ASAN_OPTIONS" in os.environ else 2
+    for i in range(MAX_ATTEMPTS):
         command = f"ls {file_path}"
         err, retcode, msg = testcase.run_platform_command(command)
-        if testcase.TraceOn():
-            testcase.trace(f"Ran command: {command}")
-            testcase.trace(f"Retcode: {retcode}")
-            testcase.trace(f"Output: {msg}")
-            testcase.trace(f"Error: {err.description}")
-
         if err.Success() and retcode == 0:
             break
-        if i < max_attempts:
-            # Exponential backoff!
-            import time
-            time.sleep(pow(2, i) * 0.25)
+        time.sleep(timeout_seconds)
     else:
         testcase.fail(
             "File %s not found even after %d attempts." % (file_path, max_attempts)
         )

_______________________________________________
lldb-commits mailing list
[email protected]
https://lists.llvm.org/cgi-bin/mailman/listinfo/lldb-commits
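For reference (not part of the patch), the quoted wait times can be checked with a quick back-of-the-envelope calculation. The numbers below (6 attempts with a 0.25 s exponential-backoff base; 60 attempts at a fixed 2 s interval, 20 s under ASan) are taken from the diff; the remaining few seconds in the quoted ~127 s figure presumably come from the platform command round-trips themselves.

```python
# Old scheme: exponential backoff, sleeping pow(2, i) * 0.25 seconds
# after each of the 6 failed attempts.
old_total = sum(pow(2, i) * 0.25 for i in range(6))

# New scheme: a fixed 2-second sleep (20 seconds when ASAN_OPTIONS is
# set) for up to 60 attempts.
new_total = 60 * 2

print(f"old: ~{old_total} s, new: ~{new_total} s")
# old: ~15.75 s, new: ~120 s
```

So total sleep time goes from roughly 16 seconds to roughly 120 seconds, matching the figures in the commit message.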
