Hi, I've just had some thoughts about the possible usefulness of having the buildfarm record the run-time of each regression test, so that we'd have some way to track each test's run-time history.
I thought the usefulness might be two-fold:

1. We could quickly identify when someone adds an overly complex test and slows down the regression tests too much.

2. We might get some faster insight into performance regressions.

I can think of about 3 reasons that a test might slow down:

a) Someone adds new tests within the test file.
b) An actual performance regression in Postgres.
c) The animal is busy with other work.

We could likely do a semi-decent job of telling a) and b) apart just by recording the latest commit that changed the .sql file for the test. We could also likely see when c) is at play by the results returning to normal again a few runs after a spike. We'd only want to pay attention to consistent slowdowns (there's a rough sketch of the kind of check I mean at the end of this email). Perhaps there would be too much variability with the parallel tests, but maybe we could just record it for the serial tests in make check-world.

I only thought of this after reading [1]. If we went ahead with that, as of now it feels like someone could quite easily break that optimisation and nobody would notice for a long time.

I admit to not having looked at the buildfarm code to determine how practical such a change would be. I've assumed there is a central database that stores all the results.

David

[1] https://www.postgresql.org/message-id/CAJ3gD9eEXJ2CHMSiOehvpTZu3Ap2GMi5jaXhoZuW=3xjlmz...@mail.gmail.com
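To illustrate the sort of check I mean, here's a rough sketch against a hypothetical test_runtime table in the central database (all of these names are made up; I haven't looked at what the buildfarm schema actually records):

    -- Hypothetical table: one row per test, per run, per animal/branch.
    CREATE TABLE test_runtime (
        animal     text        NOT NULL,  -- buildfarm member name
        branch     text        NOT NULL,  -- e.g. HEAD, REL_11_STABLE
        test_name  text        NOT NULL,  -- e.g. join, aggregates
        run_at     timestamptz NOT NULL,  -- when the run finished
        runtime_ms numeric     NOT NULL,  -- elapsed time of the test
        sql_commit text        NOT NULL   -- latest commit touching the test's .sql file
    );

    -- Flag tests whose last 5 runs are all at least 50% slower than the
    -- median of the 20 runs before that.  Using the minimum of the recent
    -- runs means a single noisy spike doesn't trigger it.
    WITH ranked AS (
        SELECT animal, branch, test_name, runtime_ms,
               row_number() OVER (PARTITION BY animal, branch, test_name
                                  ORDER BY run_at DESC) AS rn
        FROM test_runtime
    )
    SELECT animal, branch, test_name
    FROM ranked
    GROUP BY animal, branch, test_name
    HAVING min(runtime_ms) FILTER (WHERE rn <= 5)
         > 1.5 * percentile_cont(0.5) WITHIN GROUP (ORDER BY runtime_ms)
                     FILTER (WHERE rn BETWEEN 6 AND 25);

Obviously the thresholds are plucked from the air; the point is just that once the per-test history is in a table, spotting consistent slowdowns is a fairly simple query.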