Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Robert Haas
> It's the pending trigger list. He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory. Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
>
> There's a TODO item to spill that list to disk when it gets too large,
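The arithmetic quoted above checks out against the 48M rows from the original post: two pending trigger events per deleted row, at roughly 40 bytes each. A quick sanity check (a sketch; the exact per-event size varies by build):

```sql
-- ~48M rows x 2 cascade-trigger events per row x ~40 bytes per event:
SELECT pg_size_pretty(48000000::bigint * 2 * 40);
-- roughly 3.6 GB, which crowds out a 32-bit process address space
```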

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Scott Marlowe
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox wrote:
> In production, the table on which I ran DELETE FROM grows constantly with
> old data removed in bunches periodically (say up to a few 100,000s of rows
> [out of several millions] in a bunch). I'm assuming that auto-vacuum/analyze
> will allow Pos

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
Tom Lane [...@sss.pgh.pa.us] wrote:
> It's the pending trigger list. He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory. Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.

Yes, this is a 32 bit Postgres running on a 32

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Tom Lane
Robert Haas writes:
> On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox wrote:
>>> How much memory do you have in your machine? What is work_mem set to?
>>
>> 32G; work_mem=64M

> Hmm. Well then I'm not sure why you're running out of memory,

It's the pending trigger list. He's got two trigger events

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Scott Marlowe
On Mon, Feb 2, 2009 at 11:17 AM, Brian Cox wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
> or queries on this
> table pe

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread David Wilson
On Mon, Feb 2, 2009 at 3:37 PM, Brian Cox wrote:
> David Wilson [david.t.wil...@gmail.com] wrote:
>> Is this table the target of any foreign keys?
>
> There are 2 "on delete cascade" FKs that reference this table.

I believe that's the source of your memory issues. I think TRUNCATE may handle th
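TRUNCATE avoids the problem because it removes all rows without queuing per-row trigger events. On 8.3 there is no TRUNCATE ... CASCADE (that arrived in 8.4), but the command does accept a list of tables, so the tables holding the ON DELETE CASCADE foreign keys can be truncated in the same statement. A sketch, with hypothetical names for the two referencing tables:

```sql
-- Reclaims ts_defects without building a pending-trigger list.
-- Tables referencing ts_defects must be truncated along with it
-- (ts_defect_notes and ts_defect_history are placeholder names):
TRUNCATE TABLE ts_defects, ts_defect_notes, ts_defect_history;
```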

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
David Wilson [david.t.wil...@gmail.com] wrote:
> Is this table the target of any foreign keys?

There are 2 "on delete cascade" FKs that reference this table.

Brian

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription: http://www.pos

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Dave Dutcher
> -----Original Message-----
> From: Brian Cox
> Subject: [PERFORM] Deleting millions of rows
>
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:
> delete from ts_defects;
> Result: out of memory/Can't allocate size: 32 I then did 10
> or so deletes to get rid of the rows. Afterwar

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread David Wilson
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32

Is this table the target of any foreign keys?

--
- David T. Wilson
david.t.wil...@gmail.com

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Robert Haas
On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox wrote:
>> How much memory do you have in your machine? What is work_mem set to?
>
> 32G; work_mem=64M

Hmm. Well then I'm not sure why you're running out of memory, that seems
like a bug. Taking a long time, I understand. Crashing, not so much.

>> Did y

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
Robert Haas [robertmh...@gmail.com] wrote:

Thanks for your response.

> Does the table have triggers on it? Does it have indexes? What is
> the result of pg_relation_size() on that table?

No triggers; 3 indexes

cemdb=> select pg_relation_size('ts_defects');
 pg_relation_size
--
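"No triggers" can still mean trigger work at DELETE time: foreign-key constraints are enforced by internal triggers that don't appear as user triggers. One way to see which tables reference ts_defects, via the system catalogs:

```sql
-- Foreign-key constraints pointing at ts_defects; each implies
-- internal triggers that fire once per deleted row:
SELECT conname AS constraint_name,
       conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE confrelid = 'ts_defects'::regclass
  AND contype = 'f';
```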

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Jerry Champlin
Brian: One approach we use for large tables is to partition and then drop partitions as the data becomes obsolete. This way you never have the problem. Our general rule is to never delete data from a table because it is too slow. We have found this to be the preferred approach regardless of dat
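On 8.3 the partition-and-drop approach uses inheritance-based partitioning. A minimal sketch, assuming rows are bucketed by month on a timestamp column named created (a hypothetical column name):

```sql
-- The parent stays empty; each month of data lives in a child table:
CREATE TABLE ts_defects_2009_01 (
    CHECK (created >= DATE '2009-01-01' AND created < DATE '2009-02-01')
) INHERITS (ts_defects);

-- Retiring a month of old data is then a metadata operation: no
-- per-row deletes, no pending-trigger list, space freed immediately.
DROP TABLE ts_defects_2009_01;
```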

Re: [PERFORM] Deleting millions of rows

2009-02-02 Thread Robert Haas
On Mon, Feb 2, 2009 at 1:17 PM, Brian Cox wrote:
> I'm using 8.3.5. Table ts_defects has 48M rows. Through psql: delete from
> ts_defects;
> Result: out of memory/Can't allocate size: 32
> I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
> or queries on this
> table per

[PERFORM] Deleting millions of rows

2009-02-02 Thread Brian Cox
I'm using 8.3.5. Table ts_defects has 48M rows. Through psql:

delete from ts_defects;

Result: out of memory/Can't allocate size: 32

I then did 10 or so deletes to get rid of the rows. Afterwards, inserts into
or queries on this table performed significantly slower. I tried a vacuum
analyze, but
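The "10 or so deletes" workaround generalizes: deleting in bounded batches keeps the pending-trigger list small. 8.3 has no DELETE ... LIMIT, so a subquery does the limiting; this sketch assumes a primary-key column named id (hypothetical) and is repeated until it removes zero rows:

```sql
-- Delete ~100k rows per statement; the cascade triggers still fire,
-- but only ~200k events (a few MB) are ever pending at once:
DELETE FROM ts_defects
WHERE id IN (SELECT id FROM ts_defects LIMIT 100000);
```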