On May 18, 2007, at 2:30 PM, Andrew Sullivan wrote:
Note also that your approach of updating all 121 million records in
one statement is approximately the worst way to do this in Postgres,
because it creates 121 million dead tuples on your table. (You've
created some number of those already by killing the earlier update, too.)
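To make the batching idea concrete, here is a rough sketch (the table name "bigtable", its integer primary key "id", and the column "flag" are made up for illustration): split the update into ranges and VACUUM between slices, so dead tuples are reclaimed as you go instead of piling up to 121 million at once.

  UPDATE bigtable SET flag = 1 WHERE id >=        0 AND id < 10000000;
  VACUUM bigtable;
  UPDATE bigtable SET flag = 1 WHERE id >= 10000000 AND id < 20000000;
  VACUUM bigtable;
  -- ...and so on, a few million rows per slice, until the whole table is covered

Each slice commits before the next one starts, so the space freed by VACUUM can be reused by the following slice.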
On Fri, 18 May 2007, [EMAIL PROTECTED] wrote:
shared_buffers = 24MB
work_mem = 256MB
maintenance_work_mem = 512MB
You should take a minute to follow the suggestions at
http://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm and set
dramatically higher values for shared_buffers and effective_cache_size.
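For scale, the usual sizing advice works from total RAM; on a machine with, say, 4GB, the settings might look something like this (illustrative placeholder values, not a recommendation for this particular box):

  # postgresql.conf -- placeholder values for a ~4GB machine
  shared_buffers = 1GB            # roughly 1/4 of RAM is a common starting point (was 24MB here)
  effective_cache_size = 3GB      # a planner hint: how much data the OS cache can hold
  maintenance_work_mem = 512MB    # helps VACUUM and index builds
  work_mem = 64MB                 # allocated per sort/hash per backend, so keep it modest

shared_buffers only takes effect after a server restart, and on older releases may also require raising the kernel's shared memory limits.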
Craig James wrote:
> Better yet, if you can stand a short down time, you can drop indexes on
> that column, truncate, then do 121 million inserts, and finally
> reindex. That will be MUCH faster.
Or you can do a CLUSTER, which does all the same things automatically.
--
Alvaro Herrera
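Both suggestions boil down to rewriting the table rather than updating it in place. A sketch of each, using made-up names (bigtable, its primary-key index bigtable_pkey, a new column value of 1):

  -- Manual rewrite: build a fresh copy with the new value, then swap it in and reindex.
  CREATE TABLE bigtable_new AS
      SELECT id, other_col, 1 AS flag
      FROM bigtable;
  -- (drop bigtable, rename bigtable_new, recreate the indexes and constraints)

  -- CLUSTER rewrites the table in index order and rebuilds its indexes in one command.
  -- On 8.2-era servers the syntax is "CLUSTER indexname ON tablename";
  -- newer releases spell it "CLUSTER tablename USING indexname".
  CLUSTER bigtable_pkey ON bigtable;

Either way you end up with a freshly packed table and indexes containing no dead tuples, which is why it beats running a 121-million-row UPDATE and vacuuming afterwards.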
I've got a table with ~121 million records in it. Select count on it
currently takes ~45 minutes, and an update to the table to set a value
on one of the columns I finally killed after it ran 17 hours and had
still not completed. Queries into the table are butt slow, and ...
The update query that started this all I had to kill after 17 hours.
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> All of that said, 17 hours seems kinda long.
I imagine he's done a bunch of those full-table UPDATEs without
vacuuming, and now has approximately a gazillion dead tuples bloating
the table.
regards, tom lane
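If that is what happened, it is straightforward to check and fix (table name hypothetical again): compare how much disk the table occupies against what 121 million rows should need, then rewrite it to throw the dead tuples away.

  -- total on-disk size of the table plus its indexes
  SELECT pg_size_pretty(pg_total_relation_size('bigtable'));

  -- a plain VACUUM VERBOSE reports how many dead row versions it found,
  -- but only marks their space reusable; it does not shrink the file
  VACUUM VERBOSE bigtable;

  -- VACUUM FULL (or CLUSTER) actually rewrites the table and returns the space,
  -- at the price of an exclusive lock for the duration
  VACUUM FULL bigtable;

After several aborted or repeated full-table UPDATEs the table can easily be several times its necessary size, which would go a long way toward explaining both the 45-minute count and the crawling queries.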
On Friday 18 May 2007 11:51, "Joshua D. Drake" <[EMAIL PROTECTED]> wrote:
> > The update query that started this all I had to kill after 17 hours. It
> > should have updated all 121+ million records. That brought my select
> > count down to 19 minutes, but still a far cry from acceptable.
You're ...
[EMAIL PROTECTED] wrote:
I need some help on recommendations to solve a perf problem.
I've got a table with ~121 million records in it. Select count on it
currently takes ~45 minutes, and an update to the table to set a value
on one of the columns I finally killed after it ran 17 hours and had still not completed.
On Fri, May 18, 2007 at 12:43:40PM -0500, [EMAIL PROTECTED] wrote:
> I've got a table with ~121 million records in it. Select count on it
> currently takes ~45 minutes, and an update to the table to set a value on
> one of the columns I finally killed after it ran 17 hours and had still
> not completed.
I need some help on recommendations to solve a perf problem.
I've got a table with ~121 million records in it. Select count on it
currently takes ~45 minutes, and an update to the table to set a value on
one of the columns I finally killed after it ran 17 hours and had still
not completed. Queries into the table are butt slow, and ...