Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Mark Kirkwood
I wrote: Scott Marlowe wrote: Can you try changing the chunksize on the test box you're testing on to see if that helps? Yes - or I am hoping to anyway (part of posting here was to collect some outside validation for the idea). Thanks for your input! Rebuilt with 64K chunksize: tra
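
The rebuild command itself isn't shown in the preview; a minimal sketch of what rebuilding a 4-disk RAID 10 array with a 64K chunk looks like with mdadm, where the device names are assumptions:

    # Hypothetical devices sdb1..sde1 - adjust to the actual hardware
    mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=64 /dev/sd[b-e]1
    # Confirm the chunk size the array was actually built with
    mdadm --detail /dev/md0 | grep -i 'chunk size'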

Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Mark Kirkwood
Stef Telford wrote: Hello Mark, Okay, so take all of this with a pinch of salt, but I have the same config (pretty much) as you, with checkpoint_segments raised to 192. The 'test' database server is a Q8300, 8GB RAM, 2 x 7200rpm SATA into the motherboard which I then LVM-striped together; lvcre
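
The lvcreate command is cut off above; a minimal sketch of striping two physical volumes with LVM plus the quoted checkpoint setting, assuming a volume group named vg0 and an illustrative size:

    # -i 2 stripes across both disks, -I 64 sets a 64K stripe size
    lvcreate --name pgdata --size 200G -i 2 -I 64 vg0

    # postgresql.conf, per the config quoted above
    checkpoint_segments = 192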

Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Mark Kirkwood
Greg Smith wrote: On Wed, 25 Mar 2009, Mark Kirkwood wrote: I'm thinking that the raid chunksize may well be the issue. Why? I'm not saying you're wrong, I just don't see why that parameter jumped out as a likely cause here. See my other post; however, I agree - it wasn't clear whether s

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote: > Bryan Murphy writes: >> What I did was change seq_page_cost back to 1.0 and then changed >> random_page_cost to 0.5 > > [ squint... ]  It makes no physical sense for random_page_cost to be > less than seq_page_cost.  Please set them the same. >

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Tom Lane
Bryan Murphy writes: > What I did was change seq_page_cost back to 1.0 and then changed > random_page_cost to 0.5 [ squint... ] It makes no physical sense for random_page_cost to be less than seq_page_cost. Please set them the same. regards, tom lane
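
A minimal sketch of acting on that advice in a test session; the values are illustrative (1.0 is the default seq_page_cost):

    SET seq_page_cost = 1.0;
    SET random_page_cost = 1.0;   -- equal, per the advice above
    -- then re-run the problem query and re-check the plan:
    -- EXPLAIN ANALYZE SELECT ... ;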

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane wrote: > I think what you should be doing is messing with the cost parameters > ... and not in the direction you tried before.  I gather from >        effective_cache_size = 12GB > that you have plenty of RAM on this machine.  If the tables involved > are
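
The rest of the suggestion is truncated here; a hedged sketch of the usual direction when the working set fits in RAM is to lower both page costs together, keeping them equal:

    -- Illustrative values only; the point is that page fetches become cheap
    -- relative to CPU costs when most pages are already cached.
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;
    SHOW effective_cache_size;    -- 12GB in the quoted configuration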

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Tom Lane
Bryan Murphy writes: > I tried that already, but I decided to try again in case I messed up > something last time. Here's what I ran. As you can see, it still > chooses to do a sequential scan. Am I changing the stats for those > columns correctly? I think what you should be doing is messing w
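
The commands Bryan ran are cut off; raising per-column statistics normally looks like the sketch below (the table and column names are taken from later messages in the thread, the target value is illustrative):

    ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 250;
    ANALYZE itemexperiencelog;   -- re-analyze so the planner sees the new stats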

Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Stef Telford
Mark Kirkwood wrote: > I'm trying to pin down some performance issues with a machine where > I work, we are seeing (read only) query response times blow out by > an order of magnitude or more at busy times. Initially we blamed > autovacuum, but after

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus wrote: > Oh, I see.  It thinks that it'll need to pull 260,000 redundant rows in > order to get 1800 unique ones.  Only it's wrong; you're only pulling about > 4000. > > Try increasing some stats still further: itemexperiencelog.visitorid and > visitors
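
The figures Josh quotes (260,000 estimated versus roughly 4,000 actual rows) come from comparing the planner's row estimate with the actual count in EXPLAIN ANALYZE output; a hedged illustration of the kind of line to look for, with a hypothetical query:

    EXPLAIN ANALYZE
    SELECT * FROM itemexperiencelog WHERE visitorid = '...';  -- hypothetical query
    -- EXPLAIN ANALYZE prints estimated and actual row counts side by side;
    -- the mismatch described above shows up roughly as:
    --   ... (cost=... rows=260000 ...) (actual time=... rows=4000 loops=1)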

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Josh Berkus
Bryan, One thing to keep in mind: due to a lapse in our judgement at the time, this itemexperiencelog table serves as both a current-state table and a log table. Therefore, it potentially has multiple redundant entries, but we typically only look at the most recent entry to figure out the stat
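
The query itself isn't shown in this preview; one common way to read only the most recent entry per visitor out of a combined state/log table is DISTINCT ON - a sketch, where the timestamp column name is an assumption:

    -- Hypothetical: newest row per visitor, assuming a timestamp column
    -- (called "createdate" here for illustration) exists on the table.
    SELECT DISTINCT ON (visitorid) *
    FROM itemexperiencelog
    ORDER BY visitorid, createdate DESC;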

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote: > On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote: >> Looking through our configuration one more time, I see that at some >> point I set random_page_cost to 2.0, but I don't see any other changes >> to query planner settings from their defa

Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Scott Carey
On 3/25/09 1:07 AM, "Greg Smith" wrote: > On Wed, 25 Mar 2009, Mark Kirkwood wrote: > >> I'm thinking that the raid chunksize may well be the issue. > > Why? I'm not saying you're wrong, I just don't see why that parameter > jumped out as a likely cause here. > If postgres is random reading
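
For reference, a quick way to see what chunk size and readahead an md array is actually using (device name is an assumption):

    # Chunk size of the array
    mdadm --detail /dev/md0 | grep -i 'chunk size'
    # Current readahead, reported in 512-byte sectors
    blockdev --getra /dev/md0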

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Robert Haas
On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote: > Looking through our configuration one more time, I see that at some > point I set random_page_cost to 2.0, but I don't see any other changes > to query planner settings from their default values. You don't by any chance have enable_ set to "
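
The setting name is cut off above; a quick way to check whether any of the planner's enable_* switches have been turned off:

    -- All planner enable_* switches and their current values
    SELECT name, setting FROM pg_settings WHERE name LIKE 'enable%';
    -- or check one directly:
    SHOW enable_seqscan;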

Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Jerry Champlin
It sounds to me like you need to tune everything you can related to postgresql, but it is unlikely to be enough as your load continues to increase. You might want to look into moving some of the read activity off of the database. Depending on your application, memcached or ehcache could help.

Re: [PERFORM] Raid 10 chunksize

2009-03-25 Thread Greg Smith
On Wed, 25 Mar 2009, Mark Kirkwood wrote: I'm thinking that the raid chunksize may well be the issue. Why? I'm not saying you're wrong, I just don't see why that parameter jumped out as a likely cause here. Sun 4140 2x quad-core opteron 2356 16G RAM, 6x 15K 140G SAS That server doesn't