Sebastien-
I have a similar nightly process to keep our development system synched with
production. I just do a complete pg_dump of production, do a dropdb &
createdb to empty the database for development, and then restore the whole
db from the pg_dump file. Our database is about 12 GB currently,
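The nightly refresh described above can be sketched as a short script. Host names, database names, and the dump path are placeholders, not from the thread; this assumes plain-text `pg_dump` output restored via `psql`, which requires a live cluster to run:

```shell
#!/bin/sh
# Nightly refresh of the development database from production.
# All names and paths below are placeholders; adjust for your setup.
set -e

# Dump production to a plain SQL file:
pg_dump -h prod-host -U postgres proddb > /backups/proddb.dump

# Empty the development database and reload it from the dump:
dropdb devdb
createdb devdb
psql -d devdb -f /backups/proddb.dump
```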
Joe Conway <[EMAIL PROTECTED]> writes:
> Tom Lane wrote:
>> Ah-hah, I've sussed it ... you didn't actually change the storage
>> representation. You wrote:
> Yeah, I came to the same conclusion this morning (update longdna set dna
> = dna || '';), but it still seems that the chunked table is ver
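For reference, the sequence under discussion: `ALTER TABLE ... SET STORAGE` only affects values stored after the change, so a no-op update is needed to rewrite the existing rows. Table and column names follow the thread:

```sql
-- Switch the column to uncompressed, out-of-line (TOAST) storage.
-- This only affects values stored from now on:
ALTER TABLE longdna ALTER COLUMN dna SET STORAGE EXTERNAL;

-- Force existing rows to be re-stored under the new strategy:
UPDATE longdna SET dna = dna || '';
```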
From: "Tom Lane" <[EMAIL PROTECTED]>
"Matthew T. O'Connor" <[EMAIL PROTECTED]> writes:
> I chose to leave pg_autovacuum simple and not add too many features because
> the core team has said that it needs to be integrated into the backend
> before it can be considered a core tool.
I think actually it makes plenty of sense to enhance p
>> On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:
>
>> Version 7.3.4 is just out - probably worth upgrading as soon as it's
>> convenient.
>
> Does version 7.3.4 have a significant performance improvement relative to 7.3.2?
> I've downloaded version 7.3.4, but not installed yet.
No, but ther
Tom Lane wrote:
Scott Cain <[EMAIL PROTECTED]> writes:
A few days ago, I asked for advice on speeding up substring queries on
the GENERAL mailing list. Joe Conway helpfully pointed out the ALTER
TABLE STORAGE EXTERNAL documentation. After doing the alter,
the queries got slower! Here is the bac
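The pattern being tuned is roughly the following (the `id` column and offsets are illustrative assumptions; the exact query is not shown in this excerpt). With `STORAGE EXTERNAL` the value is kept out of line but uncompressed, which is meant to let `substr()` fetch only the TOAST chunks it needs rather than decompressing the whole value:

```sql
-- Fetch a slice of a large text value; with EXTERNAL storage only the
-- relevant out-of-line chunks should be read:
SELECT substr(dna, 1000000, 20000) FROM longdna WHERE id = 1;
```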
On 5 Aug 2003 at 8:09, Jeff wrote:
> I've been trying to search through the archives, but it hasn't been
> successful.
>
> We recently upgraded from pg7.0.2 to 7.3.4 and things were happy. I'm
> trying to fine tune things to get it running a bit better and I'm trying
> to figure out how vacuum ou
Joe,
Good idea, since I may not get around to profiling it this week. I
created a dump of the data set I was working with. It is available at
http://www.gmod.org/string_dump.bz2
Thanks,
Scott
On Mon, 2003-08-04 at 16:29, Joe Conway wrote:
> Is there a sample table schema and dataset available
Hello.
I have this problem: I'm running PostgreSQL 7.3 on a Windows 2000 server
with a dual P3 1 GHz / 1 GB RAM machine with good performance. To improve
performance I moved the server to a Xeon 2.4 GHz / 1 GB RAM box, and to my
surprise performance decreased by 80%. Has anybody had a similar
experience? Does
Shridhar Daithankar wrote:
> I agree, specifying per table thresholds would be good in autovacuum..
Which begs the question of what the future direction is for pg_autovacuum.
There would be some merit to having pg_autovacuum throw in some tables
in which to store persistent information, and at
"Yaroslav Mazurak" <[EMAIL PROTECTED]>
> Problem is that SQL statement (see below) is running too long. With
> current WHERE clause 'SUBSTR(2, 2) IN ('NL', 'NM') return 25 records.
> With 1 record, SELECT time is about 50 minutes and takes approx. 120Mb
> RAM. With 25 records SELECT takes about
On Tue, 5 Aug 2003, Shridhar Daithankar wrote:
> On 5 Aug 2003 at 8:09, Jeff wrote:
>
> I would suggest autovacuum daemon which is in CVS contrib works for 7.3.x as
> well.. Or schedule a vacuum analyze every 15 minutes or so..
I've just got autovacuum up and running. Since we have had a lot of
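The second suggestion above maps to a plain cron entry; the user and database names here are placeholders, not from the thread:

```
# m    h dom mon dow  command
*/15 * * * *  psql -U postgres -d mydb -c 'VACUUM ANALYZE;'
```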
I was wondering if anyone found a sweet spot regarding how many inserts to
do in a single transaction to get the best performance? Is there an
approximate number where there isn't any more performance to be had or
performance may drop off?
It's just a general question...I don't have any specifi
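The usual answer is that the sweet spot is found empirically, by timing the same insert load at several batch sizes. A minimal, database-free sketch of the batching itself (the `chunked` helper is hypothetical, not from the thread; with a real driver each yielded batch would become one transaction):

```python
def chunked(rows, batch_size):
    """Yield successive lists of at most batch_size rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # don't drop a final partial batch
        yield batch

# With a real driver, each batch would be wrapped as:
#   BEGIN; INSERT ...; INSERT ...; COMMIT;
# Here we just count how many transactions 10,000 rows would need
# at 1,000 rows per transaction:
num_commits = sum(1 for _ in chunked(range(10_000), 1_000))
```

Timing this loop at batch sizes of, say, 100, 1,000, and 10,000 against the real table is what locates the plateau the question asks about.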