Sven Willenberger <[EMAIL PROTECTED]> writes:
> Trying to determine the best overall approach for the following
> scenario:
>
> Each month our primary table accumulates some 30 million rows (which
> could very well hit 60+ million rows per month by year's end). Basically
> there will end up being a lot of historical data with little value
> beyond archival.
>
> 3) Each month:
> CREATE TABLE newmonth_dynamically_named_table (LIKE mastertable)
> INHERITS (mastertable);
> modify the copy.sql script to copy newmonth_dynamically_named_table;
> pg_dump 3monthsago_dynamically_named_table for archiving;
> drop table 3monthsago_dynamically_named_table;
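
For concreteness, here is a minimal sketch of what that monthly rotation
could look like as a shell script driving psql and pg_dump. The database
name, master table name, and the data_YYYYMM child-table naming are
placeholder assumptions for illustration, not the poster's actual schema:

#!/bin/sh
# Rotate monthly partitions implemented via PostgreSQL table inheritance.
# All names below (mydb, mastertable, data_YYYYMM) are illustrative only.
DB=mydb
NEW=data_200506      # child table for the new month
OLD=data_200503      # child table from three months ago

# Create the new month's child table; an empty column list plus INHERITS
# gives it exactly the master table's columns.
psql -d "$DB" -c "CREATE TABLE $NEW () INHERITS (mastertable);"

# (This is where the copy.sql load script would be edited to COPY into $NEW.)

# Archive the three-month-old child table, then drop it.
pg_dump -t "$OLD" "$DB" > "$OLD.dump"
psql -d "$DB" -c "DROP TABLE $OLD;"

Because the child tables inherit from mastertable, a SELECT against the
master still sees every month's rows, while DROP TABLE on the oldest
child removes a month of data almost instantly, with none of the
DELETE-and-VACUUM cost of purging 30 million rows from one large table.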