Greetings,

* Andrew Dunstan (andrew.duns...@2ndquadrant.com) wrote:
> On 6/10/20 10:13 AM, Tom Lane wrote:
> > Andrew Dunstan <andrew.duns...@2ndquadrant.com> writes:
> >> Alternatively, people with access to the database could extract the logs
> >> and post-process them using perl or python. That would involve no work
> >> on my part :-) But it would not be automated.
> >
> > Yeah, we could easily extract per-test-script runtimes, since pg_regress
> > started to print those.  But ...
> >
> >> What we do record (in build_status_log) is the time each step took. So
> >> any regression test that suddenly blew out should likewise cause a
> >> blowout in the time the whole "make check" took.
> >
> > I have in the past scraped the latter results and tried to make sense of
> > them.  They are *mighty* noisy, even when considering just one animal
> > that I know to be running on a machine with little else to do.  Maybe
> > averaging across the whole buildfarm could reduce the noise level, but
> > I'm not very hopeful.  Per-test-script times would likely be even
> > noisier (ISTM anyway, maybe I'm wrong).
> >
> > The entire reason we've been discussing a separate performance farm
> > is the expectation that buildfarm timings will be too noisy to be
> > useful to detect any but the most obvious performance effects.
>
> Yes, but will the performance farm be testing regression timings?

We are not currently envisioning that, no.

> Maybe we're going to need several test suites, one of which could be
> regression tests. But the regression tests themselves are not really
> intended for performance testing.

Agree with this- better would be tests which are specifically written to
test performance instead.

Thanks,

Stephen
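(For anyone wanting to try the post-processing Andrew mentions: a minimal
sketch in Python of scraping the per-test-script runtimes that pg_regress
prints and averaging them across runs. The line format assumed here,
"test NAME ... ok NNN ms", is based on recent pg_regress output and is an
assumption — adjust the regex to match the logs you actually extract.)

```python
import re
from collections import defaultdict

# Assumed per-test timing line from pg_regress output, e.g.
#     test boolean                      ... ok          123 ms
# Adjust this pattern if your pg_regress version formats lines differently.
TIMING_RE = re.compile(r"^(?:test\s+)?(\S+)\s+\.\.\.\s+ok\s+(\d+)\s+ms")

def parse_timings(log_text):
    """Return {test_name: [runtime_ms, ...]} from one or more concatenated logs."""
    timings = defaultdict(list)
    for line in log_text.splitlines():
        m = TIMING_RE.match(line.strip())
        if m:
            timings[m.group(1)].append(int(m.group(2)))
    return timings

def averages(timings):
    """Mean runtime per test; averaging many runs is one way to damp the noise."""
    return {name: sum(ms) / len(ms) for name, ms in timings.items()}

if __name__ == "__main__":
    sample = (
        "test boolean                      ... ok          123 ms\n"
        "test char                         ... ok           45 ms\n"
        "test boolean                      ... ok          131 ms\n"
    )
    print(averages(parse_timings(sample)))
```

As the thread notes, single-animal timings are very noisy, so anything
built on this would need to aggregate over many runs (and probably many
animals) before a regression in a single test script stands out.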