On 21/12/15 13:53, Tomas Vondra wrote:
On 12/21/2015 12:03 PM, Heikki Linnakangas wrote:
On 17/12/15 19:07, Robert Haas wrote:
If it works well empirically, does it really matter that it's
arbitrary? I mean, the entire planner is full of fairly arbitrary
assumptions about which things to consider in the cost model and
which to ignore. The proof that we have made good decisions there
is in the query plans it generates. (The proof that we have made
bad decisions in some cases is in the query plans, too.)

Agreed.

What if it only seems to work well because it was tested on the cases
it was designed for? What about workloads that behave differently?

Whenever we make changes to costing and query planning, we carefully
consider counter-examples and cases where the change might fail. I see
nothing like that in this thread - all I see is a bunch of pgbench
tests, which seems rather insufficient to me.

Agreed on that too.

I'm ready to spend some time on this, assuming we can agree on what
tests to run. Can we come up with realistic workloads where we'd
expect the patch to actually work poorly?

I think the worst-case scenario would be one where there is no
FPW-related WAL burst at all, and checkpoints are always triggered by
max_wal_size rather than checkpoint_timeout. In that scenario the
compensation formula makes the checkpoint too lazy at the beginning,
and it has to catch up more aggressively towards the end of the
checkpoint cycle.
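
To put rough numbers on that (purely for illustration; the exact
remapping isn't quoted above, so assume the target progress is
something like X^1.5, where X is the fraction of the WAL budget
consumed):

    SELECT i/10.0                               AS wal_consumed,
           round(pow(i/10.0, 1.5), 2)           AS compensated_target,
           round(i/10.0 - pow(i/10.0, 1.5), 2)  AS deferred_work
    FROM generate_series(1, 10) AS i;

With perfectly uniform WAL production, at the 50% mark the scheduler
would only aim to have written ~35% of the buffers, so roughly 15% of
the work is pushed towards the end of the cycle and has to be done in
a burst just before the checkpoint deadline.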

One way to produce that scenario might be to do only COPYs into a
table with no indexes. Or to hack pgbench to concentrate all the
updates on just a handful of rows; there will be FPWs on those few
pages initially, but the spike will be much shorter. Or to set
full_page_writes=off, hack the patch to apply the compensation even
when full_page_writes=off, and then just run plain pgbench.
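
For the hot-row variant, a custom script along these lines should do
(a hypothetical sketch; the file name and the range of 10 rows are
arbitrary):

    \setrandom aid 1 10
    BEGIN;
    UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
    END;

run as e.g. "pgbench -n -f hotspot.sql -c 16 -T 900", with the scale
factor and max_wal_size chosen so that checkpoints are triggered by
WAL volume rather than by checkpoint_timeout.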

- Heikki



