On Wed, Oct 6, 2021 at 1:57 AM Andres Freund <and...@anarazel.de> wrote:
> After a recent migration of the skink and a few other animals (sorry for the
> false reports on BF, I forgot to adjust a path), I looked at the time it takes
> to complete a valgrind run:
>
> 9.6:  Consumed 4h 53min 18.518s CPU time
> 10:   Consumed 5h 32min 50.839s CPU time
> 11:   Consumed 7h 7min 17.455s CPU time
> 14:   still going at 11h 51min 57.951s
> HEAD: 14h 32min 29.571s CPU time
>
> I changed it so that HEAD will be built in parallel separately from the other
> branches, so that HEAD gets results within a useful timeframe. But even with
> that, the test times are increasing at a rate we're not going to be able to
> keep up with.
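As a back-of-the-envelope check (not part of the original message; the durations are just the ones quoted above), the quoted CPU times can be parsed and turned into branch-over-branch growth ratios like so:

```python
import re

# Durations copied from the buildfarm numbers quoted above
# (14 is excluded because that run had not finished).
durations = {
    "9.6": "4h 53min 18.518s",
    "10": "5h 32min 50.839s",
    "11": "7h 7min 17.455s",
    "HEAD": "14h 32min 29.571s",
}

def to_seconds(s):
    """Convert a 'Xh Ymin Z.ZZZs' string to total seconds."""
    m = re.fullmatch(r"(?:(\d+)h\s+)?(?:(\d+)min\s+)?([\d.]+)s", s)
    h, mn, sec = m.groups()
    return int(h or 0) * 3600 + int(mn or 0) * 60 + float(sec)

secs = {branch: to_seconds(d) for branch, d in durations.items()}
branches = list(secs)
for prev, cur in zip(branches, branches[1:]):
    print(f"{prev} -> {cur}: {secs[cur] / secs[prev]:.2f}x")
```

Roughly: 9.6 -> 10 is about 1.13x, 10 -> 11 about 1.28x, and 11 -> HEAD about 2.04x (HEAD is nearly 3x the 9.6 runtime), which is the trend being complained about.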
Is the problem here that we're adding a lot of new test cases? Or is the
problem that valgrind runs are getting slower for the same number of test
cases?

If it's taking longer because we have more test cases, I'm honestly not sure
that's really something we should try to fix. I mean, I'm sure we have some
bad test cases here and there, but overall I think we still have too little
test coverage, not too much. The recent discovery that recovery_end_command
had zero test coverage is one fine example of that.

But if we've done something that increases the relative cost of valgrind,
maybe we can fix that in a centralized way.

--
Robert Haas
EDB: http://www.enterprisedb.com