Re: [PERFORM] Limit I/O bandwidth of a certain backend

2009-05-05 Thread Bryan Murphy
On Tue, May 5, 2009 at 2:31 AM, Vlad Arkhipov wrote: > Is there a way to limit I/O bandwidth/CPU usage of a certain backend? It > seems that ionice/renice make no sense because the bgwriter/WAL writer > processes are not part of a backend. I have a periodic query (every > hour) that makes huge I/
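
A sketch of the usual workaround, assuming Linux with the CFQ I/O scheduler; as the thread notes, this only throttles I/O issued by the backend itself, not writes flushed later by bgwriter or the WAL writer:

    -- Find the PID of the offending backend. The column is "pid" on
    -- PostgreSQL 9.2+; it was "procpid" on releases current at the time.
    SELECT pid, query FROM pg_stat_activity;
    -- Then, from a shell on the server:
    --   ionice -c3 -p <pid>    # move the backend to the idle I/O class
    --   renice +10 -p <pid>    # and/or lower its CPU priority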

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote: > Bryan Murphy writes: >> What I did was change seq_page_cost back to 1.0 and then changed >> random_page_cost to 0.5 > > [ squint... ]  It makes no physical sense for random_page_cost to be > less than seq_page_cost.  
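
The invariant Tom is describing, checked against the defaults (values shown are the stock ones):

    SHOW seq_page_cost;      -- 1.0 by default
    SHOW random_page_cost;   -- 4.0 by default; keep this >= seq_page_cost,
                             -- letting it approach 1.0 only as caching improves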

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane wrote: > I think what you should be doing is messing with the cost parameters > ... and not in the direction you tried before.  I gather from >        effective_cache_size = 12GB > that you have plenty of RAM on this machine.  If the tables involved > are
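
A session-level version of that experiment, assuming the working set really is cached (the exact numbers are illustrative, not from the thread):

    SET effective_cache_size = '12GB';
    -- Scale *both* page costs down together rather than inverting them:
    SET seq_page_cost = 0.25;
    SET random_page_cost = 0.5;
    -- ...then re-run EXPLAIN ANALYZE on the problem query in this session.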

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus wrote: > Oh, I see.  It thinks that it'll need to pull 260,000 redundant rows in > order to get 1800 unique ones.  Only it's wrong; you're only pulling about > 4000. > > Try increasing some stats still further: itemexperiencelog.visitorid and > visitors
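
Raising per-column statistics targets looks like this (the target of 500 is illustrative; the snippet cuts off before naming the second column, so it is guessed from the visitors_userid_index2 index quoted elsewhere in the thread):

    ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 500;
    ALTER TABLE visitors ALTER COLUMN userid SET STATISTICS 500;
    -- The planner only sees the larger sample after a fresh ANALYZE:
    ANALYZE itemexperiencelog;
    ANALYZE visitors;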

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-25 Thread Bryan Murphy
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote: > On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote: >> Looking through our configuration one more time, I see that at some >> point I set random_page_cost to 2.0, but I don't see any other changes >> to query p
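
One way to answer "what else did I change?" straight from the server is to list every setting whose value didn't come from the built-in defaults:

    SELECT name, setting, source
      FROM pg_settings
     WHERE source NOT IN ('default', 'override')
     ORDER BY name;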

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Bryan Murphy
On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus wrote: > For some reason, your first post didn't make it to the list, which is why > nobody responded. Weird... I've been having problems with gmail and google reader all week. >>> I've got a query on our production system that isn't choosing a good >

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Bryan Murphy
On Tue, Mar 24, 2009 at 10:04 PM, marcin mank wrote: > There is one thing I don't understand: > >              ->  Nested Loop  (cost=0.00..180564.28 rows=1806 > width=37) (actual time=0.192..60.214 rows=3174 loops=1) >                    ->  Index Scan using visitors_userid_index2 on > visitors v

Re: [PERFORM] Help Me Understand Why I'm Getting a Bad Query Plan

2009-03-24 Thread Bryan Murphy
On Mon, Mar 23, 2009 at 2:03 PM, Bryan Murphy wrote: > Hey Guys, > > I've got a query on our production system that isn't choosing a good > plan.  I can't see why it's choosing to do a sequential scan on the > ItemExperienceLog table.  That table is about 800mb and has abo
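
A standard first diagnostic for an unwanted sequential scan is to forbid it for one session and compare plans (a testing knob only, never a production setting):

    SET enable_seqscan = off;
    -- EXPLAIN ANALYZE <the problem query>;
    -- If the forced index plan is actually faster, the cost settings are
    -- mis-tuned; if it is slower, the seqscan was the right choice.
    SET enable_seqscan = on;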

Re: [PERFORM] query io stats and finding a slow query

2007-09-25 Thread Bryan Murphy
We use pgfouine (http://pgfouine.projects.postgresql.org/). I currently have postgres log every query that takes longer than 100ms, roll the log files every 24 hours, and run pgfouine nightly. I check it every couple of mornings and this gives me a pretty good picture of who misbehaved over the l
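
The logging half of that setup as postgresql.conf entries (parameter names per current releases; 8.2-era servers used redirect_stderr instead of logging_collector):

    # log any statement running longer than 100 ms
    log_min_duration_statement = 100
    # capture server output to log files and roll them daily
    logging_collector = on
    log_rotation_age = 1d
    # a line prefix pgfouine's parser understands
    log_line_prefix = '%t [%p]: [%l-1] '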

Re: [PERFORM] SAN vs Internal Disks

2007-09-07 Thread Bryan Murphy
We are currently running our database on a SAN share. It looks like this: 2 x RAID 10 (4 disk SATA 7200 each) Raid Group 0 contains the tables + indexes Raid Group 1 contains the log files + backups (pg_dump) Our database server connects to the SAN via iSCSI over Gig/E using jumbo frames.

Re: [PERFORM] How to ENABLE SQL capturing???

2007-08-08 Thread Bryan Murphy
We currently have logging enabled for all queries over 100ms, and keep the last 24 hours of logs before we rotate them. I've found this tool very helpful in diagnosing new performance problems that crop up: http://pgfouine.projects.postgresql.org/ Bryan On 8/8/07, Steinar H. Gunderson <[EMAIL P

Re: [PERFORM] cpu throttling

2007-08-02 Thread Bryan Murphy
On 8/2/07, Alan Hodgson <[EMAIL PROTECTED]> wrote: > On Thursday 02 August 2007 09:02, "Bryan Murphy" <[EMAIL PROTECTED]> > wrote: > > My question: Is there a way I can decrease the priority of a specific > > query, or determine the PID of the process it is runni

[PERFORM] cpu throttling

2007-08-02 Thread Bryan Murphy
We have a complicated stored procedure that we run frequently. It pegs one of our postmaster processes at 100% CPU utilization for a few hours. This has the unfortunate side effect of causing increased latency for our other queries. We are currently planning a fix, but because of the complicated
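
The snippet cuts off before the question is answered; a sketch of the usual approach (the procedure name is hypothetical) is to look the backend up in pg_stat_activity and renice it:

    -- On the 8.x releases current at the time, the columns were
    -- procpid and current_query (pid and query on 9.2+):
    SELECT procpid, query_start, current_query
      FROM pg_stat_activity
     WHERE current_query LIKE '%our_complicated_proc%';
    -- then, as root on the server: renice +15 -p <procpid>

Note that renice only lowers CPU priority; the backend still holds whatever locks it takes, so other queries can still stall behind it.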

Re: [PERFORM] Optmal tags design?

2007-07-18 Thread Bryan Murphy
We store tags on our items like this: Tag.ID INT NOT NULL PRIMARY KEY Tag.Value TEXT LCASE NOT NULL UNIQUE Item.ID INT NOT NULL PRIMARY KEY ItemTagBinding.ItemID INT NOT NULL REFERENCES Item.ID ItemTagBinding.TagID INT NOT NULL REFERENCES Tag.ID ItemTagBinding.ItemID + ItemTagBinding.
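
Spelled out as PostgreSQL DDL (LCASE is not a Postgres keyword, so a CHECK stands in for it; the compound key on the binding table is assumed from where the snippet cuts off):

    CREATE TABLE tag (
        id    integer PRIMARY KEY,
        value text NOT NULL UNIQUE CHECK (value = lower(value))
    );

    CREATE TABLE item (
        id integer PRIMARY KEY
    );

    CREATE TABLE itemtagbinding (
        itemid integer NOT NULL REFERENCES item(id),
        tagid  integer NOT NULL REFERENCES tag(id),
        PRIMARY KEY (itemid, tagid)
    );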

Re: [PERFORM] Two questions.. shared_buffers and long reader issue

2007-07-11 Thread Bryan Murphy
We have a few tables that we need to pull relatively accurate aggregate counts from, and we found the performance of SELECT COUNT(*) to be unacceptable. We solved this by creating triggers on insert and delete to update counts in a secondary table which we join to when we need the count informati
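
A minimal version of that trigger scheme (all names are hypothetical, not from the thread):

    CREATE TABLE row_counts (
        table_name text PRIMARY KEY,
        row_count  bigint NOT NULL
    );
    -- Seed once with the current count, inside a transaction:
    INSERT INTO row_counts SELECT 'item', count(*) FROM item;

    CREATE OR REPLACE FUNCTION maintain_row_count() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE row_counts SET row_count = row_count + 1
             WHERE table_name = TG_TABLE_NAME;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE row_counts SET row_count = row_count - 1
             WHERE table_name = TG_TABLE_NAME;
        END IF;
        RETURN NULL;  -- AFTER trigger: return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER item_row_count
    AFTER INSERT OR DELETE ON item
    FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();

The trade-off is that every writer updates the same counter row, which serializes heavy concurrent insert/delete traffic on that table.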

Re: [PERFORM] startup caching suggestions

2007-06-25 Thread Bryan Murphy
No, but I was just informed of that trick earlier and intend to try it soon. Sometimes, the solution is so simple it's TOO obvious... :) Bryan On 6/25/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote: On Mon, 25 Jun 2007, Bryan Murphy wrote: > We have a search facility in our data

[PERFORM] startup caching suggestions

2007-06-25 Thread Bryan Murphy
We have a search facility in our database that uses full text indexing to search about 300,000 records spread across 2 tables. Nothing fancy there. The problem is, whenever we restart the database (system crash, lost connectivity to SAN, upgrade, configuration change, etc.) our data is not cache
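
The snippet ends before the fix, but the warm-up trick alluded to in the reply above generally amounts to touching the data once after startup (table and index names are hypothetical; on 9.4+ the pg_prewarm extension does this directly):

    -- A full scan pulls each table back through the OS page cache:
    SELECT count(*) FROM items;
    SELECT count(*) FROM item_search_index;
    -- count(*) does not touch the indexes; on modern servers:
    --   CREATE EXTENSION pg_prewarm;
    --   SELECT pg_prewarm('items_fulltext_idx');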