On 09/11/2017 05:32 PM, Robert Haas wrote:
> On Sun, Sep 10, 2017 at 9:39 PM, Peter Geoghegan <p...@bowt.ie> wrote:
>> To be clear, you'll still need to set replacement_sort_tuples high
>> when testing RS, to make sure that we really use it for at least the
>> first run when we're expected to. (There is no easy way to have
>> testing mechanically verify that we really do only have one run in the
>> end with RS, but I assume that such paranoia is unneeded.)
>
> I seem to recall that raising replacement_sort_tuples makes
> replacement selection look worse in some cases -- the optimization is
> more likely to apply, sure, but the heap is also bigger, which hurts.
>
The question is what the optimal replacement_sort_tuples value is. I
assume it's the number of tuples for which the heap still uses CPU
caches effectively, at least that's what our docs say. So I think
you're right that raising it to 1B rows may break this assumption, and
make it perform worse.

But perhaps the fact that we're testing with multiple work_mem values,
and with smaller data sets (100k or 1M rows), makes this a non-issue?

regards

--
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
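[The testing setup discussed above could be sketched as the following
psql session. This is a hypothetical sketch, not a script from the
thread: the table name "t" and sort column "a" are assumed, and it
targets PostgreSQL 10, where the replacement_sort_tuples GUC exists.
The trace_sort developer option logs whether replacement selection was
actually used, which addresses Peter's point that there is no easy
mechanical check.]

```sql
-- Sketch of one benchmark iteration (assumed table "t", column "a").
SET replacement_sort_tuples = 1000000000;  -- set high so RS is used for the first run
SET work_mem = '4MB';                      -- vary across iterations
SET trace_sort = on;                       -- emits sort method/run details to the log
EXPLAIN (ANALYZE, TIMING OFF)
SELECT * FROM t ORDER BY a;
```

[With trace_sort enabled, the server log would show whether the sort
finished as a single run via replacement selection or fell back to an
external merge.]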