On Fri, Nov 02, 2018 at 10:38:45AM -0400, Robert Haas wrote:
> I think it's in evidence, in the form of several messages mentioning a
> flag called try_every_block.
> 
> Just checking the last page of the table doesn't sound like a good
> idea to me.  I think that will just lead to a lot of stupid bloat.  It
> seems likely that checking every page of the table is fine for npages
> <= 3, and that would still be a win in a very significant number of
> cases, since lots of instances have many empty or tiny tables.  I was
> merely reacting to the suggestion that the approach should be used for
> npages <= 32; that threshold sounds way too high.

It seems to me that it would be costly for schemas which have one core
table with a couple of records that gets joined against by many other
queries.  Imagine for example a core table like this:
CREATE TABLE us_states (id serial, initials varchar(2));
INSERT INTO us_states VALUES (DEFAULT, 'CA');

If there is a workload where those initials need to be fetched a lot,
this patch could cause a regression.  It looks hard to me to put a
precise number on when not having the FSM is better than having it,
because that is environment-dependent, so there is an argument for
making the default threshold very low, but keeping it configurable?
--
Michael
