On 03/29/07 23:56, Gerald Timothy G Quimpo wrote:
[snip]
>
> How do people take consistent backups of very large
> databases on Linux/FreeBSD? I'm aware of PITR, but
> might not be able to set aside a box with enough
> drives for it. LVM Snapshot?
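A minimal sketch of the PITR route mentioned above, assuming WAL
archiving (archive_command) is already configured; the label and paths
are illustrative:

    SELECT pg_start_backup('weekly_base');  -- superuser; forces a checkpoint
    -- at the OS level, rsync $PGDATA or take the LVM snapshot here
    SELECT pg_stop_backup();                -- closes the backup window

An LVM snapshot taken between those two calls, together with the
archived WAL, gives a consistent, restorable base backup without
dedicating a whole box to a standby.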
On Thu, 2007-03-29 at 21:30 -0700, Benjamin Arai wrote:
> Rebuilding an index can't be the PostgreSQL solution for all
> cases. I am dealing with databases in the hundreds of gigs
> range and I am adding about 10 gigs of data a week. At
> some point it's going to take longer than a week to rebuild.
On Thu, 2007-03-29 at 22:15 -0700, Benjamin Arai wrote:
> I have one system where I have used partitioning. For this particular
> case I have tons of data spanning about 50 years. What I did was write a
> small loader that breaks the data into tables based on date, so I have
> tables like abc_2000, abc_2001, etc.
I have one system where I have used partitioning. For this particular
case I have tons of data spanning about 50 years. What I did was write a
small loader that breaks the data into tables based on date, so I have
tables like abc_2000, abc_2001, etc. The loading script is only a couple
hundred lines of code.
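A minimal sketch of that layout, using inherited child tables with
CHECK constraints (column names and loader details are illustrative,
not from the original script):

    CREATE TABLE abc (
        event_date  date NOT NULL,
        payload     text
    );
    CREATE TABLE abc_2000 (
        CHECK (event_date >= DATE '2000-01-01' AND event_date < DATE '2001-01-01')
    ) INHERITS (abc);
    CREATE TABLE abc_2001 (
        CHECK (event_date >= DATE '2001-01-01' AND event_date < DATE '2002-01-01')
    ) INHERITS (abc);
    -- the loader routes each batch straight into the child table for
    -- its year, so only that year's (much smaller) indexes are touched:
    COPY abc_2001 FROM '/data/batch_2001.csv' WITH CSV;

With constraint_exclusion enabled, queries on abc that filter by
event_date skip the irrelevant years entirely.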
I agree, this is true if I cannot defer index updates. But if it is
possible to defer index updates until the end, then I should be able to
achieve some sort of speedup. Rebuilding an index can't be the
PostgreSQL solution for all cases. I am dealing with databases in the
hundreds of gigs range, and I am adding about 10 gigs of data a week.
On 03/29/07 18:35, Tom Lane wrote:
> Benjamin Arai <[EMAIL PROTECTED]> writes:
>> I would prefer not to drop the index because the database is several
>> hundred gigs. I would prefer to incrementally add to the index.
>
> This may well be false economy.
Benjamin Arai <[EMAIL PROTECTED]> writes:
> I would prefer not to drop the index because the database is several
> hundred gigs. I would prefer to incrementally add to the index.
This may well be false economy. I don't have numbers at hand, but a
full rebuild can be substantially faster than adding the entries
incrementally.
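A hedged sketch of the rebuild approach (the index name is
illustrative; maintenance_work_mem is the standard knob for speeding up
the bulk sort):

    SET maintenance_work_mem = '1GB';
    REINDEX INDEX abc_payload_idx;
    -- or DROP INDEX followed by CREATE INDEX, which builds the b-tree
    -- in one bulk sort instead of one insertion per new row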
Benjamin Arai wrote:
> I would prefer not to drop the index because the database is several
> hundred gigs. I would prefer to incrementally add to the index.
I know of no way to do that in a batch, unless you go with partitioned
tables.
On 03/29/07 14:51, Benjamin Arai wrote:
> I would prefer not to drop the index because the database is several
> hundred gigs. I would prefer to incrementally add to the index.
Some RDBMSs (well, one that I know of) have the ability to defer
index updates.
On 03/29/07 14:41, Bruce Momjian wrote:
> Benjamin Arai wrote:
>> So, is there a way to defer the index updating until a later period
>> of time? More specifically, I would like to do several COPIES to a
>> running database, then afterward force an update on the index via a
>> vacuum or something similar.
I would prefer not to drop the index because the database is several
hundred gigs. I would prefer to incrementally add to the index.
Benjamin
Bruce Momjian wrote:
> Benjamin Arai wrote:
>> So, is there a way to defer the index updating until a later period
>> of time? More specifically, I would like to do several COPIES to a
>> running database, then afterward force an update on the index via a
>> vacuum or something similar.
Benjamin Arai wrote:
> So, is there a way to defer the index updating until a later period
> of time? More specifically, I would like to do several COPIES to a
> running database, then afterward force an update on the index via a
> vacuum or something similar.
Sure, drop the index, do the COPY, and then recreate the index.
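In SQL terms, a minimal sketch of that approach (table, index, and
file names are illustrative):

    DROP INDEX abc_payload_idx;
    COPY abc FROM '/data/batch1.csv' WITH CSV;
    COPY abc FROM '/data/batch2.csv' WITH CSV;
    -- one bulk rebuild at the end, instead of per-row index
    -- maintenance during each COPY:
    CREATE INDEX abc_payload_idx ON abc (payload);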
So, is there a way to defer the index updating until a later period
of time? More specifically, I would like to do several COPIES to a
running database, then afterward force an update on the index via a
vacuum or something similar.
Benjamin
On Mar 29, 2007, at 1:03 AM, A. Kretschmer wrote:
On Thu, 29.03.2007 at 10:02:49 -0700, Benjamin Arai wrote the following:
> So, is there a way to defer the index updating until a later period
> of time? More specifically, I would like to do several COPIES to a
> running database, then afterward force an update on the index via a
> vacuum or something similar.
"A. Kretschmer" <[EMAIL PROTECTED]> writes:
> On Thu, 29.03.2007 at 0:13:09 -0700, Benjamin Arai wrote the following:
>> If I have a PostgreSQL table with records and logical indexes already
>> created, if I use COPY to load additional data, does the COPY update
>> the indexes during, after, or not at all?
On Thu, 29.03.2007 at 0:13:09 -0700, Benjamin Arai wrote the following:
> Hi,
>
> If I have a PostgreSQL table with records and logical indexes already
> created, if I use COPY to load additional data, does the COPY update
> the indexes during, after, or not at all?
After, I think.
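A minimal sketch of checking this in psql (table and file names are
illustrative):

    test=# CREATE TABLE t (id integer);
    test=# CREATE INDEX t_id_idx ON t (id);
    test=# COPY t FROM '/tmp/ids.txt';
    test=# ANALYZE t;
    test=# EXPLAIN SELECT * FROM t WHERE id = 42;
    -- with enough rows, the plan shows an index scan on t_id_idx,
    -- i.e. the index already reflects the rows loaded by COPY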