On Fri, Feb 25, 2011 at 14:26, Alvaro Herrera <alvhe...@commandprompt.com> wrote:

> Excerpts from Rod Taylor's message of Fri Feb 25 14:03:58 -0300 2011:
>
> > How practical would it be for ANALYZE to keep a record of response
> > times for given sections of a table as it randomly accesses them, and
> > to generate some kind of map of expected response times for the
> > pieces of data it is analysing?
>
> I think what you want is random_page_cost that can be tailored per
> tablespace.
>
>
Yes, that can certainly help, but it does nothing for finding typical
hot spots or cached sections of the table and passing that information
to the planner.
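
(For anyone trying the per-tablespace route: as of 9.0 the costs can be
set with ALTER TABLESPACE. A minimal sketch, assuming a tablespace named
ssd_space:

    ALTER TABLESPACE ssd_space
        SET (random_page_cost = 1.5, seq_page_cost = 1.0);

But that is a static, whole-tablespace knob, which is exactly the
limitation described above.)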

Between ANALYZE's random sampling and perhaps some metric gathered
during the actual I/O of ordinary queries, we should be able to
determine and record which pieces of data tend to be hot (in cache or
otherwise readily available) and which tend not to be.
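
You can approximate that kind of map by hand today with
contrib/pg_buffercache, which at least shows which block ranges of a
relation are currently in shared_buffers. A rough sketch (the table
name is made up; 1024 blocks is 8MB at the default page size):

    SELECT relblocknumber / 1024 AS range_8mb,
           count(*)              AS cached_buffers
    FROM   pg_buffercache
    WHERE  relfilenode = pg_relation_filenode('my_table')
    AND    reldatabase = (SELECT oid FROM pg_database
                          WHERE datname = current_database())
    GROUP  BY 1
    ORDER  BY 1;

That only covers shared_buffers, of course; it says nothing about the
OS cache, and it is a point-in-time snapshot rather than the history
ANALYZE could accumulate.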


If the planner knew that the value "1" tends to have a much lower fetch
cost than any other value in the table (because it is cached or
otherwise readily available), it could choose a plan better suited to
that.
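
ANALYZE already tracks per-value frequencies (most_common_vals and
most_common_freqs in pg_stats); roughly speaking, the idea here would
amount to attaching an expected fetch cost alongside those. To see what
is tracked today (table and column names are hypothetical):

    SELECT most_common_vals, most_common_freqs
    FROM   pg_stats
    WHERE  tablename = 'my_table'
    AND    attname   = 'status';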
