Hi,

Right now check-world, a sensible thing to run before commits that aren't very narrow, takes a long time; long enough that it impacts development velocity to the point where it seems more sensible to just skip it and rely on the buildfarm.
That's not a great solution.

A large amount of the time is actually spent doing completely redundant initdb and cluster start/stop work, and running tests serially that could be run in parallel. A single check-world on my machine takes over 20min. That's just not realistic to run without hurting development pace.

We can avoid a lot of redundant work (skip redundant initdb & cluster start/stop), and we can quite easily parallelize other parts. The problem is that doing so isn't entirely trivially scriptable, e.g. make installcheck-world doesn't run all tests, and it doesn't run them in parallel. Scripting it locally also has the issue that it's very easy not to notice new tests being added by others.

As an example of the speedups, here's the comparison for contrib:

  make -C contrib                                                  2m21.056s
  make -C contrib installcheck                                     0m30.672s
  make -C contrib -j16 -s -Otarget installcheck USE_MODULE_DB=1    0m10.418s

That's not an entirely fair comparison, because test_decoding doesn't do installcheck, but the general principle holds. This is obviously a large difference.

A lot of the slow TAP tests could be run in parallel, recovering a lot more time. There are also some tests that simply take way too long, e.g. the pg_dump tests spend ~30s doing largely redundant checks.

I'm not quite sure what the best way to attack this is, but I think we need to do something.

Greetings,

Andres Freund
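For illustration, a minimal sketch of what driving some of this by hand could look like, assuming a scratch cluster under /tmp/pgcheck and a --enable-tap-tests build; the data directory, the -j widths and the selection of suites here are made up, and the list is nowhere near complete:

  # one initdb + start for everything, instead of one per suite
  # (assumes nothing else is listening on the default port)
  initdb -D /tmp/pgcheck
  pg_ctl -D /tmp/pgcheck -l /tmp/pgcheck.log -w start

  # core regression tests against the already-running cluster
  make -C src/test/regress installcheck

  # contrib in parallel, each module in its own database
  make -C contrib -j16 -s -Otarget installcheck USE_MODULE_DB=1

  # TAP suites still do their own initdb/start, but prove can at least run
  # a directory's test files concurrently (assuming they tolerate that)
  make -C src/test/recovery check PROVE_FLAGS='-j4'

  pg_ctl -D /tmp/pgcheck -w stop

Of course that's exactly the kind of script that silently goes stale as soon as somebody adds a new suite, which is part of the problem.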