Joseph Myers <jos...@codesourcery.com>:
> On Mon, 23 Jul 2018, Richard Earnshaw (lists) wrote:
> 
> > So traditional git bisect is inherently serial, but we can be more
> > creative here, surely.  A single run halves the search space each time.
> > But three machines working together can split it into 4 each run, 7
> > machines into 8, etc.  You don't even need a precise 2^N - 1 to get a
> > speedup.
> 
> Exactly.  Given an appropriate recipe for testing whether the conversion 
> of history up to a given revision is OK or not, I can run tests in 
> parallel for nine different revisions on nine different machines (each 
> with 128 GB memory) at the same time as easily as running one such test.  

I'll try to work up a script that bundles the test procedure this week.
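
To make the parallel idea concrete, the driver logic would be something
like the sketch below.  Everything here is illustrative rather than
final: test_conversion is a placeholder for whatever per-revision recipe
we settle on (in practice it would ssh the job out to one of the test
machines), and k is the number of machines available.

    #!/usr/bin/env python3
    # Sketch of a parallel bisection driver.  lo is a known-good
    # revision, hi a known-bad one.  test_conversion(rev) is a
    # hypothetical hook standing in for "convert history up to rev
    # and check the result".
    from concurrent.futures import ThreadPoolExecutor

    def parallel_bisect(lo, hi, k, test_conversion):
        "Return the first bad revision in (lo, hi]."
        while hi - lo > 1:
            # Probe k revisions spaced evenly through the range,
            # cutting it to ~1/(k+1) of its size per round.
            step = max(1, (hi - lo) // (k + 1))
            probes = list(range(lo + step, hi, step))[:k]
            with ThreadPoolExecutor(max_workers=k) as pool:
                results = list(pool.map(test_conversion, probes))
            for rev, ok in zip(probes, results):
                if ok:
                    lo = rev   # breakage is above this probe
                else:
                    hi = rev   # first failing probe bounds it above
                    break
        return hi

With nine machines each round shrinks the suspect range by a factor of
ten rather than two, so a range of (say) 250000 revisions narrows to a
single revision in about six rounds instead of the eighteen a serial
bisect would take.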

Before I do that, though, I have an idea that might reduce the test times
a lot.  It hasn't been practical to strip and topo-reduce the repository
before now, but I'm almost finished porting repocutter to Go.
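
If that lands in time, the test recipe could front-load a reduction
pass, something like this (I'm hedging on the exact invocations - check
the repocutter manual page for the version you have; gcc.svn is a
placeholder for the local dump file):

    repocutter strip <gcc.svn >gcc-stripped.svn     # replace content blobs with small cookies
    repocutter reduce <gcc-stripped.svn >gcc-small.svn  # keep only revisions that affect topology

A stripped and reduced dump should convert in a small fraction of the
time the full one does, which multiplies whatever speedup we get from
parallel probing.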

> I think parallelising the bisection process is a better approach than 
> trying to convert only a subset of branches (which I don't think would 
> help with the present problem - though we can always consider killing 
> selected branches with too many mid-branch deletealls, if appropriate) or 
> waiting for a move to Go.

The conversion already kills one old branch because of an unrecoverable
malformation - I checked with the devlist about this.  I agree that
cutting other branches is unlikely to help while we have an incorrect
conversion on trunk.
-- 
		Eric S. Raymond <http://www.catb.org/~esr/>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.

