Yep, effective_cache_size was set high specifically because we have lots of 
RAM to play with, and we want to ensure we are caching wherever possible.
The reduced random_page_cost was primarily to account for the fact that 
we're using relatively fast disks (indexes on SSD, table data on SAS 
drives), though I hadn't fully appreciated how the combination of these two 
settings influences the planner's preference for a sequential versus an 
index scan.
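
For reference, here's roughly the shape of what we have. The values below 
are illustrative only, not our actual settings:

    # postgresql.conf -- example values only, adjust to your hardware
    effective_cache_size = 48GB   # planner's estimate of memory available
                                  # for caching (shared_buffers + OS cache)
    random_page_cost = 2.0        # default is 4.0; costs are relative to
                                  # seq_page_cost (1.0), so a smaller gap
                                  # makes random index I/O look cheaper

    -- both are also settable per session, handy when experimenting:
    SET effective_cache_size = '48GB';
    SET random_page_cost = 2.0;

The interaction is exactly the one that bit us here: both knobs reduce the 
estimated cost of fetching index pages, so together they push the planner 
away from sequential scans.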

I think I'll stop tweaking for now and see how it performs over the next 
few days. I feel like I have a much better handle on how the planner is 
pulling everything together. Cheers.

Tim
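
P.S. For anyone finding this thread in the archives, a quick recipe for 
comparing plans side by side while experimenting. The table and column 
names below are made up purely for illustration:

    -- Hypothetical table/column names, for illustration only
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM events WHERE account_id = 42;

    -- Temporarily disable seq scans to see what the index plan would
    -- cost by comparison (a debugging aid, not a production setting)
    SET enable_seqscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM events WHERE account_id = 42;
    RESET enable_seqscan;

    -- The per-column statistics target discussed below (default is 100)
    ALTER TABLE events ALTER COLUMN account_id SET STATISTICS 1000;
    ANALYZE events;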


On 11 Aug 2013, at 01:38, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Tim Kane <tim.k...@gmail.com> writes:
>> I guess the clustering approach managed to work around the need to mess with 
>> the statistics target.  I did previously increase the target to 1000 (from 
>> 100) for that field, with no impact, but this is an aspect of tuning I'm 
>> not so familiar with - I didn't consider pushing it all the way to 11.
> 
> Yeah, I had actually started to write an email recommending that you dial
> down effective_cache_size and increase random_page_cost, before I noticed
> the discrepancy in the merge join cost and realized what was really going
> on.
> 
> The question now is why you had those settings like that before, and
> whether changing them back in the direction of the defaults might not be
> pessimizing the behavior for other queries.  If you have a lot of RAM and
> mostly-cached queries, the previous settings didn't sound unreasonable.
> 
>                       regards, tom lane


