Re: [GENERAL] Table growing faster than autovacuum can vacuum

2012-03-24 Thread Scott Marlowe
On Sat, Mar 24, 2012 at 9:40 PM, Jasen Betts wrote:
> have you tried using COPY instead of INSERT (you'll have to insert
> into the correct partition)

Triggers fire on COPY, but rules do not. So if he has partitioning triggers, they'll fire on the parent table etc. HOWEVER, that'll be slower than COPYing directly into the correct child partition.
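A sketch of the two paths being contrasted, assuming inheritance-based partitioning with a routing trigger on the parent (table and file names are hypothetical):

    -- Fast path: load straight into the correct child partition,
    -- bypassing the parent and its routing trigger entirely.
    COPY readings_2012_03 FROM '/data/march.csv' WITH CSV;

    -- Slower path: COPY into the parent still fires any BEFORE INSERT
    -- routing trigger once per row (rules, by contrast, do not apply
    -- to COPY at all).
    COPY readings FROM '/data/march.csv' WITH CSV;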

Re: [GENERAL] Table growing faster than autovacuum can vacuum

2012-03-24 Thread Jasen Betts
On 2012-02-15, Asher Hoskins wrote:
> Hello.
>
> I've got a database with a very large table (currently holding 23.5
> billion rows, the output of various data loggers over the course of my
> PhD so far). The table itself has a trivial structure (see below) and is
> partitioned by data time/date and has quite acceptable INSERT/SELECT
> performance.

Re: [GENERAL] Table growing faster than autovacuum can vacuum

2012-02-15 Thread Scott Marlowe
On Wed, Feb 15, 2012 at 12:38 PM, John R Pierce wrote:
> so, your ~monthly batch run could be something like...
>
>    create new partition table
>    copy/insert your 1-2 billion rows
>    vacuum analyze (NOT full) new table
>    vacuum freeze new table
>    update master partition table rules
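A minimal sketch of that batch run, assuming the inheritance-based partitioning typical of PostgreSQL 8.x/9.x (all table, column, and file names are hypothetical):

    -- Create next month's partition; the CHECK constraint lets the
    -- planner exclude it from queries outside its time range.
    CREATE TABLE readings_2012_03 (
        CHECK (logged_at >= '2012-03-01' AND logged_at < '2012-04-01')
    ) INHERITS (readings);

    -- Bulk-load directly into the new partition.
    COPY readings_2012_03 FROM '/data/readings_2012_03.csv' WITH CSV;

    -- Refresh planner statistics without rewriting the table.
    VACUUM ANALYZE readings_2012_03;

    -- Freeze the rows now, so autovacuum never has to rewrite this
    -- partition later to prevent transaction-ID wraparound.
    VACUUM FREEZE readings_2012_03;

    -- Point INSERTs on the master at the new partition (rule-based
    -- routing; a trigger is the common alternative).
    CREATE OR REPLACE RULE readings_insert_current AS
        ON INSERT TO readings
        DO INSTEAD INSERT INTO readings_2012_03 VALUES (NEW.*);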

Re: [GENERAL] Table growing faster than autovacuum can vacuum

2012-02-15 Thread John R Pierce
On 02/15/12 8:46 AM, Asher Hoskins wrote:
> I've got a database with a very large table (currently holding 23.5
> billion rows,

A table that large should probably be partitioned, likely by time, maybe a partition for each month. As each partition is filled, it can be VACUUM FREEZE'd, since it will never be modified again.
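To see how close each table is to forced anti-wraparound vacuuming, the age of its frozen transaction ID can be checked against the standard catalogs; a freshly VACUUM FREEZE'd partition drops to the bottom of this list:

    -- Tables with the oldest relfrozenxid are the ones autovacuum
    -- must scan first to prevent transaction-ID wraparound.
    SELECT relname, age(relfrozenxid) AS xid_age
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY xid_age DESC
    LIMIT 10;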

Re: [GENERAL] Table growing faster than autovacuum can vacuum

2012-02-15 Thread Marti Raudsepp
On Wed, Feb 15, 2012 at 19:25, Marti Raudsepp wrote:
> VACUUM FULL is extremely inefficient in PostgreSQL 8.4 and older.

Oh, a word of warning: PostgreSQL 9.0+ has a faster VACUUM FULL implementation, but it now requires twice your table's size in disk space during the vacuum process. Regards, Marti
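Since the 9.0+ VACUUM FULL rewrites the table into a fresh copy, the old and new copies coexist on disk until the rewrite commits, so it is worth checking the relation's size against free space first (table name hypothetical):

    -- Roughly this much extra free disk space is needed while the
    -- rewrite runs.
    SELECT pg_size_pretty(pg_total_relation_size('readings'));

    VACUUM FULL readings;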

Re: [GENERAL] Table growing faster than autovacuum can vacuum

2012-02-15 Thread Marti Raudsepp
On Wed, Feb 15, 2012 at 18:46, Asher Hoskins wrote:
> My problem is that the autovacuum system isn't keeping up with INSERTs
> and I keep running out of transaction IDs.

This is usually not a problem with vacuum, but a problem with consuming too many transaction IDs. I suspect you're loading that data one INSERT per transaction, so every row consumes a transaction ID.
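A sketch of the distinction (table name hypothetical): in autocommit mode every INSERT is its own transaction and consumes one transaction ID, while a batched transaction or COPY consumes a single ID for the whole load.

    -- Autocommit: one transaction ID per row (what to avoid).
    INSERT INTO readings VALUES (1, now(), 42.0);
    INSERT INTO readings VALUES (2, now(), 43.1);

    -- Batched: one transaction ID for the whole block.
    BEGIN;
    INSERT INTO readings VALUES (1, now(), 42.0);
    INSERT INTO readings VALUES (2, now(), 43.1);
    COMMIT;

    -- COPY: one transaction ID for millions of rows.
    COPY readings FROM '/data/batch.csv' WITH CSV;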

[GENERAL] Table growing faster than autovacuum can vacuum

2012-02-15 Thread Asher Hoskins
Hello.

I've got a database with a very large table (currently holding 23.5 billion rows, the output of various data loggers over the course of my PhD so far). The table itself has a trivial structure (see below) and is partitioned by data time/date and has quite acceptable INSERT/SELECT performance. My problem is that the autovacuum system isn't keeping up with INSERTs and I keep running out of transaction IDs.
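The schema itself is cut off in this excerpt, but a minimal sketch of the kind of time-partitioned logger table being described might look like this (all names and types are hypothetical):

    -- Parent table: queried directly, but holds no rows itself.
    CREATE TABLE readings (
        logger_id  integer      NOT NULL,
        logged_at  timestamptz  NOT NULL,
        value      real         NOT NULL
    );

    -- One child per month; the CHECK constraint lets the planner skip
    -- partitions that cannot match a query's time range.
    CREATE TABLE readings_2012_02 (
        CHECK (logged_at >= '2012-02-01' AND logged_at < '2012-03-01')
    ) INHERITS (readings);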