On Wed, Jun 16, 2021 at 03:13:30PM -0400, Andrew Dunstan wrote:
> On 6/16/21 2:59 PM, Fabien COELHO wrote:
> > The key feedback for me is the usual one: what is not tested does not
> > work. Wow :-)
>
> Agreed.
>
> > I'm unhappy because I already added tap tests for time-sensitive
> > features (-T and others, maybe logging aggregates, cannot remember),
> > which have been removed because they could fail under some
> > circumstances (e.g. very very very very slow hosts), or required some
> > special handling (a few lines of code) in pgbench, and the net result
> > of this is there is not a single test in place for some features :-(
>
> I'm not familiar with exactly what happened in this case, but tests need
> to be resilient over a wide range of performance characteristics. One
> way around this issue might be to have a way of detecting that it's on a
> slow platform and if so either skipping tests (Test::More provides
> plenty of support for this) or expecting different results.
Detection would need the host to be consistently slow, e.g. running under Valgrind or on a 20-year-old CPU. We also test on systems having highly-variable performance, because other processes compete for the same hardware. I'd perhaps add a "./configure --enable-realtime-tests" option that enables the affected tests. Testers should use the option whenever the execution environment has sufficient reserved CPU.
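To make that concrete, the gating could reduce to something like the sketch below. The PG_REALTIME_TESTS variable and the wiring from the configure option are hypothetical, and it's shown as shell TAP rather than the Perl we actually use, just to illustrate the shape:

```shell
# Hypothetical sketch: suppose "./configure --enable-realtime-tests" exported
# PG_REALTIME_TESTS=1 into the TAP test environment.  The variable name and
# the option are illustrative, not existing PostgreSQL interfaces.
tap_plan_for_host() {
    if [ "${PG_REALTIME_TESTS:-0}" != "1" ]; then
        # On slow or shared hosts, emit a TAP "skip all" plan so the
        # timing-sensitive tests stay green instead of failing spuriously.
        echo "1..0 # SKIP timing-sensitive tests need reserved CPU"
    else
        echo "1..2"  # plan for the real -T and aggregate-logging checks
    fi
}

PG_REALTIME_TESTS=0
plan=$(tap_plan_for_host)
echo "$plan"
```

A Test::More-based script could check the same variable and call "plan skip_all => ..." accordingly, so nothing in pgbench itself needs special handling.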