On Thu, Mar 20, 2014 at 10:45 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> So you might think that the problem here is that we're assuming
>> uniform density.  Let's say there are a million rows in the table, and
>> there are 100 that match our criteria, so the first one is going to
>> happen 1/10,000'th of the way through the table.  Thus we set SC =
>> 0.0001 * TC, and that turns out to be an underestimate if the
>> distribution isn't as favorable as we're hoping.  However, that is NOT
>> what we are doing.  What we are doing is setting SC = 0.  I mean, not
>> quite 0, but yeah, effectively 0. Essentially we're assuming that no
>> matter how selective the filter condition may be, we assume that it
>> will match *the very first row*.
>
> I think this is wrong.  Yeah, the SC may be 0 or near it, but the time to
> fetch the first tuple is estimated as SC + (TC-SC)/N.

Hmm, you're right, and experimentation confirms that the total cost of
the limit comes out to about TC/selectivity.  So scratch that theory.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers

Reply via email to