On Mon, 23 Jul 2018, Richard Earnshaw (lists) wrote:

> So traditional git bisect is inherently serial, but we can be more
> creative here, surely.  A single run halves the search space each time.
> But three machines working together can split it into 4 each run, 7
> machines into 8, etc.  You don't even need a precise 2^N - 1 to get a
> speedup.
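
To make the quoted arithmetic concrete (illustrative numbers, not taken
from the thread): with k machines probing k interior revisions per round,
each round cuts the candidate range to about 1/(k+1) of its size.
Isolating one bad commit among 4096 candidates thus takes log2 4096 = 12
serial rounds, but only log4 4096 = 6 rounds with 3 machines and
log8 4096 = 4 rounds with 7, since 8^4 = 4096.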
Exactly.  Given an appropriate recipe for testing whether the conversion
of history up to a given revision is OK, I can run tests in parallel for
nine different revisions on nine different machines (each with 128 GB of
memory) as easily as I can run one such test.  (And the conversions of
shorter initial segments of history should be faster, so if the bug turns
out to relate to conversion of cvs2svn scar tissue, you don't even need
to wait for the conversions of longer portions of history to complete
before you've narrowed down where the bug is and can start another such
bisection.)

I think parallelising the bisection process is a better approach than
trying to convert only a subset of branches (which I don't think would
help with the present problem, though we can always consider killing
selected branches with too many mid-branch deletealls, if appropriate)
or waiting for a move to Go.
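
As a rough sketch of such a driver (Python; the test script name, the
K = 9 machine count, and the revision bounds are placeholders for
illustration, not the actual conversion recipe):

#!/usr/bin/env python3
# Sketch of a K-ary parallel bisection driver.  The test script,
# machine count, and revision bounds below are placeholders, not the
# real conversion recipe.

import subprocess
from concurrent.futures import ThreadPoolExecutor

K = 9  # revisions we can test simultaneously, one per machine

def test_revision(rev):
    """Return True if conversion of history up to `rev` checks out OK.

    Hypothetical: in practice this would dispatch a convert-and-verify
    job to one of the machines (e.g. over ssh); here it shells out to
    a placeholder script.
    """
    return subprocess.run(["./test-conversion.sh", str(rev)]).returncode == 0

def parallel_bisect(lo, hi):
    """Return the first bad revision in (lo, hi], given lo good, hi bad.

    Each round probes K evenly spaced interior revisions at once,
    cutting the candidate range to roughly 1/(K+1) of its size, versus
    1/2 per round for serial git bisect.
    """
    while hi - lo > 1:
        step = max(1, (hi - lo) // (K + 1))
        probes = list(range(lo + step, hi, step))[:K]
        with ThreadPoolExecutor(max_workers=K) as pool:
            results = list(pool.map(test_revision, probes))
        # Narrow to the segment between the last good probe and the
        # first bad one; the good-lo/bad-hi invariant is preserved.
        for rev, ok in zip(probes, results):
            if ok:
                lo = rev
            else:
                hi = rev
                break
    return hi

if __name__ == "__main__":
    print("first bad revision:", parallel_bisect(0, 265000))

The narrowing step keeps lo good and hi bad whichever probe fails first,
so every round shrinks the range by about the same 1/(K+1) factor.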

-- 
Joseph S. Myers
jos...@codesourcery.com