On Tue, May 5, 2009 at 2:31 AM, Vlad Arkhipov wrote:
> Is there a way to limit the I/O bandwidth/CPU usage of a certain backend? It
> seems that ionice/renice makes no sense because the bgwriter/WAL writer
> processes are not part of a backend. I have a periodic query (every
> hour) that makes huge I/
On Wed, Mar 25, 2009 at 10:28 PM, Tom Lane wrote:
> Bryan Murphy writes:
>> What I did was change seq_page_cost back to 1.0 and then changed
>> random_page_cost to 0.5
>
> [ squint... ] It makes no physical sense for random_page_cost to be
> less than seq_page_cost.
On Wed, Mar 25, 2009 at 9:15 PM, Tom Lane wrote:
> I think what you should be doing is messing with the cost parameters
> ... and not in the direction you tried before. I gather from
> effective_cache_size = 12GB
> that you have plenty of RAM on this machine. If the tables involved
> are
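The usual way to encode a well-cached database in the planner's cost model is to shrink the gap between random and sequential page cost rather than inverting it. A sketch (the values are illustrative, not from the thread; only effective_cache_size = 12GB comes from the quoted message):

```
# postgresql.conf -- for a mostly-cached database
seq_page_cost = 1.0
random_page_cost = 1.5     # drifts toward seq_page_cost as the cache hit rate rises
effective_cache_size = 12GB
```

Keeping random_page_cost >= seq_page_cost preserves the physical meaning Tom Lane points out: a random fetch can never be cheaper than a sequential one on the same storage.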
On Wed, Mar 25, 2009 at 4:55 PM, Josh Berkus wrote:
> Oh, I see. It thinks that it'll need to pull 260,000 redundant rows in
> order to get 1800 unique ones. Only it's wrong; you're only pulling about
> 4000.
>
> Try increasing some stats still further: itemexperiencelog.visitorid and
> visitors
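Raising a per-column statistics target is done with ALTER TABLE followed by a fresh ANALYZE. A sketch using the column named in the thread (the target value 500 is illustrative):

```sql
ALTER TABLE itemexperiencelog ALTER COLUMN visitorid SET STATISTICS 500;
ANALYZE itemexperiencelog;
```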
On Wed, Mar 25, 2009 at 8:40 AM, Robert Haas wrote:
> On Tue, Mar 24, 2009 at 11:43 PM, Bryan Murphy wrote:
>> Looking through our configuration one more time, I see that at some
>> point I set random_page_cost to 2.0, but I don't see any other changes
>> to query p
On Tue, Mar 24, 2009 at 9:30 PM, Josh Berkus wrote:
> For some reason, your first post didn't make it to the list, which is why
> nobody responded.
Weird... I've been having problems with gmail and google reader all week.
>>> I've got a query on our production system that isn't choosing a good
>
On Tue, Mar 24, 2009 at 10:04 PM, marcin mank wrote:
> There is one thing I don`t understand:
>
>   ->  Nested Loop  (cost=0.00..180564.28 rows=1806 width=37) (actual time=0.192..60.214 rows=3174 loops=1)
>         ->  Index Scan using visitors_userid_index2 on visitors v
On Mon, Mar 23, 2009 at 2:03 PM, Bryan Murphy wrote:
> Hey Guys,
>
> I've got a query on our production system that isn't choosing a good
> plan. I can't see why it's choosing to do a sequential scan on the
> ItemExperienceLog table. That table is about 800mb and has abo
We use pgfouine (http://pgfouine.projects.postgresql.org/).
I currently have postgres log every query that takes longer than
100ms, roll the log files every 24 hours, and run pgfouine nightly. I
check it every couple of mornings and this gives me a pretty good
picture of who misbehaved over the l
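The logging setup described above corresponds to roughly this configuration (a sketch; pgfouine itself runs separately, e.g. as a nightly cron job over the rotated logs):

```
# postgresql.conf
log_min_duration_statement = 100   # log statements slower than 100 ms
log_rotation_age = 1440            # rotate the log file every 24 hours (minutes)
```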
We are currently running our database on a SAN share. It looks like this:
2 x RAID 10 (4 disk SATA 7200 each)
Raid Group 0 contains the tables + indexes
Raid Group 1 contains the log files + backups (pg_dump)
Our database server connects to the san via iSCSI over Gig/E using
jumbo frames.
We currently have logging enabled for all queries over 100ms, and keep
the last 24 hours of logs before we rotate them. I've found this tool
very helpful in diagnosing new performance problems that crop up:
http://pgfouine.projects.postgresql.org/
Bryan
On 8/8/07, Steinar H. Gunderson <[EMAIL PROTECTED]> wrote:
On 8/2/07, Alan Hodgson <[EMAIL PROTECTED]> wrote:
> On Thursday 02 August 2007 09:02, "Bryan Murphy" <[EMAIL PROTECTED]>
> wrote:
> > My question: Is there a way I can decrease the priority of a specific
> > query, or determine the PID of the process it is runni
We have a complicated stored procedure that we run frequently. It
pegs one of our postmaster processes at 100% CPU utilization for a few
hours. This has the unfortunate side effect of causing increased
latency for our other queries. We are currently planning a fix, but
because of the complicated
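One way to answer the question above is to look the backend up in pg_stat_activity and renice it from the shell. A sketch (the procedure name is a placeholder; on releases before PostgreSQL 9.2 the columns are procpid/current_query, renamed pid/query afterwards):

```sql
SELECT procpid, current_query
FROM pg_stat_activity
WHERE current_query LIKE '%my_stored_proc%';
-- then, from a shell on the server:  renice +10 -p <that pid>
```

Note that renice only lowers CPU scheduling priority; it does not throttle the I/O the backend (or the bgwriter on its behalf) performs.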
We store tags on our items like this like this:
Tag.ID INT NOT NULL PRIMARY KEY
Tag.Value TEXT LCASE NOT NULL UNIQUE
Item.ID INT NOT NULL PRIMARY KEY
ItemTagBinding.ItemID INT NOT NULL REFERENCES Item.ID
ItemTagBinding.TagID INT NOT NULL REFERENCES Tag.ID
ItemTagBinding.ItemID + ItemTagBinding.
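In PostgreSQL DDL the schema sketched above might look like this (an approximation: LCASE is not PostgreSQL syntax, so it is stood in for by a lower() CHECK, and the truncated last line is assumed to describe a composite key over ItemID + TagID):

```sql
CREATE TABLE tag (
    id    integer PRIMARY KEY,
    value text NOT NULL UNIQUE CHECK (value = lower(value))  -- stands in for LCASE
);

CREATE TABLE item (
    id integer PRIMARY KEY
);

CREATE TABLE item_tag_binding (
    item_id integer NOT NULL REFERENCES item (id),
    tag_id  integer NOT NULL REFERENCES tag (id),
    PRIMARY KEY (item_id, tag_id)   -- assumed from the truncated line above
);
```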
We have a few tables that we need to pull relatively accurate aggregate
counts from, and we found the performance of SELECT COUNT(*) to be
unacceptable. We solved this by creating triggers on insert and delete to
update counts in a secondary table which we join to when we need the count
informati
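A minimal version of that trigger scheme might look like this (table and function names are hypothetical):

```sql
CREATE TABLE item_count (n bigint NOT NULL);
INSERT INTO item_count VALUES (0);

CREATE FUNCTION maintain_item_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE item_count SET n = n + 1;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE item_count SET n = n - 1;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_count_trig
    AFTER INSERT OR DELETE ON item
    FOR EACH ROW EXECUTE PROCEDURE maintain_item_count();
```

One design caveat: the single counter row becomes a serialization point, since every insert and delete on the base table must take a row lock on it.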
No, but I was just informed of that trick earlier and intend to try it
soon. Sometimes, the solution is so simple it's TOO obvious... :)
Bryan
On 6/25/07, Oleg Bartunov <[EMAIL PROTECTED]> wrote:
On Mon, 25 Jun 2007, Bryan Murphy wrote:
> We have a search facility in our data
We have a search facility in our database that uses full text indexing to
search about 300,000 records spread across 2 tables. Nothing fancy there.
The problem is, whenever we restart the database (system crash, lost
connectivity to SAN, upgrade, configuration change, etc.) our data is not
cache
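The thread does not show the exact warm-up trick referred to above, but the usual approach is to force a read of the relevant relations right after restart so they are pulled back into cache. A sketch (table name hypothetical):

```sql
-- crude warm-up: a sequential scan pulls the table through the cache
SELECT count(*) FROM items;

-- on PostgreSQL 9.4+ the pg_prewarm extension does this directly:
-- CREATE EXTENSION pg_prewarm;
-- SELECT pg_prewarm('items');
```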