Re: [PERFORM] postgresql.conf suggestions?

2009-05-20 Thread Greg Smith
On Wed, 20 May 2009, Kobby Dapaah wrote: shared_buffers = 2048MB effective_cache_size = 5400MB You should consider seriously increasing effective_cache_size. You might also double or quadruple shared_buffers from 2GB, but going much higher may not buy you much--most people seem to find diminishing returns ...
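For the 36GB machine described in this thread, a minimal sketch of what that advice might translate to in postgresql.conf -- the exact numbers are illustrative assumptions, not values endorsed in the message:

    # postgresql.conf -- illustrative values for a 36GB RAM server
    shared_buffers = 4096MB        # doubled from 2048MB; 8192MB would be the quadrupled upper end
    effective_cache_size = 24GB    # roughly 2/3 of RAM; a planner hint only, allocates no memory

Note that effective_cache_size tells the planner how much data the OS filesystem cache is likely to hold, so raising it allocates nothing and mainly makes index scans look cheaper to the optimizer.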

Re: [PERFORM] postgresql.conf suggestions?

2009-05-20 Thread Robert Haas
On Wed, May 20, 2009 at 12:22 PM, Kobby Dapaah wrote: > I just upgraded from a 2xIntel Xeon-Harpertown 5450-Quadcore, 16 GB, Redhat EL 5.1-64 to a 2xIntel Xeon-Nehalem 5570-Quadcore, 36 GB, Redhat EL 5.3-64. > Any advice on how I'll get the best of this server? > This is what I currently have: ...

[PERFORM] postgresql.conf suggestions?

2009-05-20 Thread Kobby Dapaah
I just upgraded from a 2xIntel Xeon-Harpertown 5450-Quadcore, 16 GB, Redhat EL 5.1-64 to a 2xIntel Xeon-Nehalem 5570-Quadcore, 36 GB, Redhat EL 5.3-64. Any advice on how I'll get the best of this server? This is what I currently have: max_connections = 100 shared_buffers = 2048MB maintenance_work_mem = ...
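A quick way to see the settings this thread goes on to discuss is to query the pg_settings view; the sketch below only reads current values and changes nothing:

    -- List the memory-related settings mentioned in this thread
    SELECT name, setting, unit
    FROM   pg_settings
    WHERE  name IN ('max_connections', 'shared_buffers',
                    'effective_cache_size', 'maintenance_work_mem',
                    'work_mem');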

Re: [PERFORM] Any better plan for this query?..

2009-05-20 Thread Simon Riggs
On Wed, 2009-05-20 at 07:17 -0400, Robert Haas wrote: > On Wed, May 20, 2009 at 4:11 AM, Simon Riggs wrote: > > The Hash node is fully executed before we start pulling rows through the Hash Join node. So the Hash Join node will know at execution time whether or not it will continue to maintain sorted order. ...
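A minimal sketch of the situation under discussion, using hypothetical table names (big, lookup) that do not come from the thread; the plan shape in the comments is schematic, not verbatim EXPLAIN output:

    -- big has an index on sort_col; lookup is small enough to hash in memory
    EXPLAIN
    SELECT b.sort_col, l.label
    FROM   big b
    JOIN   lookup l ON l.id = b.lookup_id
    ORDER  BY b.sort_col;

    -- Schematic plan shape:
    --   Sort (by b.sort_col)
    --     -> Hash Join
    --          -> Index Scan on big        (rows already arrive in sort_col order)
    --          -> Hash
    --               -> Seq Scan on lookup
    --
    -- The Hash is built in full before the join emits its first row. If it
    -- fits in a single in-memory batch, the join preserves the outer (index)
    -- order and the Sort above is redundant -- but that is only known at
    -- execution time, which is why the Sort stays in the plan.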

Re: [PERFORM] Any better plan for this query?..

2009-05-20 Thread Robert Haas
On Wed, May 20, 2009 at 4:11 AM, Simon Riggs wrote: > The Hash node is fully executed before we start pulling rows through the Hash Join node. So the Hash Join node will know at execution time whether or not it will continue to maintain sorted order. So we put the Sort node into the plan, ...

Re: [PERFORM] Any better plan for this query?..

2009-05-20 Thread Simon Riggs
On Tue, 2009-05-19 at 23:54 -0400, Robert Haas wrote: > I don't think it's a good idea to write off the idea of implementing this optimization at some point. I see a lot of queries that join one fairly large table against a whole bunch of little tables, and then sorting the results by a column ...
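The query pattern described here might look like the following, under a hypothetical schema (a fact table plus small lookup tables -- none of these names come from the thread):

    -- One large table joined to several small ones, sorted by a column
    -- of the large table
    SELECT f.created_at, c.name AS country, s.name AS status
    FROM   fact f
    JOIN   country c ON c.id = f.country_id
    JOIN   status  s ON s.id = f.status_id
    ORDER  BY f.created_at;

    -- Each small table hashes into a single in-memory batch, so the hash
    -- joins would in fact emit rows in the outer scan's order; the proposed
    -- optimization would let the planner notice that and drop the top-level
    -- Sort when f.created_at is fed by an ordered index scan.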