Re: [PERFORM] 121+ million record table perf problems

2007-05-21 Thread Vivek Khera
On May 18, 2007, at 2:30 PM, Andrew Sullivan wrote: Note also that your approach of updating all 121 million records in one statement is approximately the worst way to do this in Postgres, because it creates 121 million dead tuples on your table. (You've created some number of those by killing …)
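The batched alternative Vivek alludes to can be sketched as follows; the table name `big_table`, the column `some_col`, and the integer key `id` are illustrative assumptions, not details from the thread:

```sql
-- Assumed schema: big_table(id integer PRIMARY KEY, some_col integer, ...).
-- Updating in slices and vacuuming between batches lets later batches
-- reuse the space freed by earlier ones, instead of leaving
-- 121 million dead tuples behind in a single statement.
UPDATE big_table SET some_col = 0 WHERE id >=       0 AND id < 1000000;
VACUUM big_table;
UPDATE big_table SET some_col = 0 WHERE id >= 1000000 AND id < 2000000;
VACUUM big_table;
-- ...continue in 1M-row steps across the whole key range.
```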

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Greg Smith
On Fri, 18 May 2007, [EMAIL PROTECTED] wrote: shared_buffers = 24MB, work_mem = 256MB, maintenance_work_mem = 512MB. You should take a minute to follow the suggestions at http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm and set dramatically higher values for shared_buffers and …
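The kind of tuning the linked guide recommends comes down to raising the main memory knobs; an illustrative postgresql.conf fragment for an 8.2-era machine with a few GB of RAM (the specific values here are assumptions, not recommendations from the thread):

```
# postgresql.conf -- illustrative values for a ~4 GB machine;
# these numbers are assumptions, not quoted from the thread.
shared_buffers = 512MB          # vs. the 24MB quoted above
effective_cache_size = 2GB      # rough size of the OS page cache
maintenance_work_mem = 512MB    # speeds VACUUM and index builds
work_mem = 32MB                 # allocated per sort/hash node, so keep it modest
```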

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Alvaro Herrera
Craig James wrote: > Better yet, if you can stand a short down time, you can drop indexes on > that column, truncate, then do 121 million inserts, and finally > reindex. That will be MUCH faster. Or you can do a CLUSTER, which does all the same things automatically. -- Alvaro Herrera
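The two approaches in this exchange can be sketched side by side; all object names here are illustrative:

```sql
-- Option A (Craig James): manual rebuild.
DROP INDEX big_table_some_col_idx;
TRUNCATE big_table;
-- ...COPY the 121 million rows back in...
CREATE INDEX big_table_some_col_idx ON big_table (some_col);

-- Option B (Alvaro): CLUSTER rewrites the table in index order
-- and rebuilds every index in one step.
CLUSTER big_table_pkey ON big_table;        -- 8.2-era syntax
-- CLUSTER big_table USING big_table_pkey; -- syntax in later releases
```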

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Craig James
I've got a table with ~121 million records in it. Select count on it currently takes ~45 minutes, and an update to the table to set a value on one of the columns I finally killed after it ran 17 hours and had still not completed. Queries into the table are butt slow, and the update query …

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Tom Lane
Andrew Sullivan <[EMAIL PROTECTED]> writes: > All of that said, 17 hours seems kinda long. I imagine he's done a bunch of those full-table UPDATEs without vacuuming, and now has approximately a gazillion dead tuples bloating the table. regards, tom lane
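Tom's diagnosis can be checked and repaired roughly like this; the table name is illustrative:

```sql
-- Planner statistics give a quick read on bloat: relpages is the
-- on-disk size in 8kB pages, reltuples the estimated live rows.
SELECT relpages, reltuples FROM pg_class WHERE relname = 'big_table';

-- Plain VACUUM marks dead tuples reusable but does not shrink the file;
-- VACUUM FULL compacts the table, at the cost of an exclusive lock.
VACUUM ANALYZE big_table;
VACUUM FULL big_table;
```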

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Alan Hodgson
On Friday 18 May 2007 11:51, "Joshua D. Drake" <[EMAIL PROTECTED]> wrote: > > The update query that started this all I had to kill after 17 hours. It > > should have updated all 121+ million records. That brought my select > > count down to 19 minutes, but still a far cry from acceptable. You're …

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Brian Hurt
[EMAIL PROTECTED] wrote: I need some help on recommendations to solve a perf problem. I've got a table with ~121 million records in it. Select count on it currently takes ~45 minutes, and an update to the table to set a value on one of the columns I finally killed after it ran 17 hours and had still not completed …

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Joshua D. Drake
[EMAIL PROTECTED] wrote: I need some help on recommendations to solve a perf problem. I've got a table with ~121 million records in it. Select count on it currently takes ~45 minutes, and an update to the table to set a value on one of the columns I finally killed after it ran 17 hours and had still not completed …

Re: [PERFORM] 121+ million record table perf problems

2007-05-18 Thread Andrew Sullivan
On Fri, May 18, 2007 at 12:43:40PM -0500, [EMAIL PROTECTED] wrote: > I've got a table with ~121 million records in it. Select count on it > currently takes ~45 minutes, and an update to the table to set a value on > one of the columns I finally killed after it ran 17 hours and had still > not completed …

[PERFORM] 121+ million record table perf problems

2007-05-18 Thread cyber-postgres
I need some help on recommendations to solve a perf problem. I've got a table with ~121 million records in it. Select count on it currently takes ~45 minutes, and an update to the table to set a value on one of the columns I finally killed after it ran 17 hours and had still not completed. …
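For the 45-minute count specifically: `SELECT count(*)` in Postgres must scan the whole heap, so when an estimate is good enough, the planner's statistics answer instantly. A hedged sketch, with an illustrative table name:

```sql
-- Approximate row count from planner statistics; accuracy depends on
-- how recently the table was ANALYZEd (or visited by autovacuum).
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'big_table';
```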