At 23:36 2/07/00 +1000, Martijn van Oosterhout wrote:
>
>Some version that came with Red Hat (6.5.x) (I didn't
>install this machine). I'll grab it and see if it
>works.
I'll need to put the 6.5 version on the FTP site first...
At 23:34 1/07/00 +1000, Martijn van Oosterhout wrote:
>
>> Philip Warner needs alpha testers for his new version of pg_dump ;-).
>> Unfortunately I think he's only been talking about it on pghackers
>> so far.
>
>What versions does it work on?
>
6.5.x and 7.0.x.
Which version are you running?
Tom Lane wrote:
> COPY uses a streaming style of output. To generate INSERT commands,
> pg_dump first does a "SELECT * FROM table", and that runs into libpq's
> suck-the-whole-result-set-into-memory behavior. See nearby thread
> titled "Large Tables(>1 Gb)".
Hmm, any reason why pg_dump couldn't use a cursor and FETCH the rows in batches instead?
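[For illustration, a minimal libpq sketch of that cursor approach, assuming an already-open connection "conn"; the table name "bigtable" is a placeholder:]

    /* Sketch: dump a table row by row through a cursor so libpq never
     * has to hold the whole result set in memory at once. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static void dump_with_cursor(PGconn *conn)
    {
        PGresult *res;

        res = PQexec(conn, "BEGIN");
        PQclear(res);
        res = PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM bigtable");
        PQclear(res);

        for (;;)
        {
            int ntuples, i, j;

            /* Fetch a bounded batch; only this batch is buffered by libpq. */
            res = PQexec(conn, "FETCH 1000 FROM c");
            if (PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
                PQclear(res);
                break;
            }
            ntuples = PQntuples(res);
            if (ntuples == 0)
            {
                PQclear(res);
                break;          /* cursor exhausted */
            }
            for (i = 0; i < ntuples; i++)
            {
                for (j = 0; j < PQnfields(res); j++)
                    printf("%s%s", j ? "\t" : "", PQgetvalue(res, i, j));
                printf("\n");
            }
            PQclear(res);
        }

        res = PQexec(conn, "CLOSE c");
        PQclear(res);
        res = PQexec(conn, "COMMIT");
        PQclear(res);
    }

[Each FETCH buffers only 1000 rows at a time, so memory use stays bounded no matter how large the table is.]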
Martijn van Oosterhout <[EMAIL PROTECTED]> writes:
> Is there a better way? Here pg_dumping the DB takes over half an hour
> (mainly because pg_dump chews all available memory).
pg_dump shouldn't be a performance hog if you are using the default
COPY-based style of data export. I'd only expect memory problems if you're dumping the data as INSERT commands instead.
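[For comparison, a sketch of how COPY output is consumed one row at a time, which is why the default COPY-based dump stays flat in memory. This uses the present-day PQgetCopyData() API (7.0-era code used PQgetline()); "conn" is an open connection and "bigtable" is again a placeholder:]

    #include <stdio.h>
    #include <libpq-fe.h>

    static void dump_with_copy(PGconn *conn, FILE *out)
    {
        PGresult *res;
        char     *buf;
        int       len;

        res = PQexec(conn, "COPY bigtable TO STDOUT");
        if (PQresultStatus(res) != PGRES_COPY_OUT)
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
        PQclear(res);

        /* Only one row is in memory at a time; the server streams the rest. */
        while ((len = PQgetCopyData(conn, &buf, 0)) > 0)
        {
            fwrite(buf, 1, len, out);
            PQfreemem(buf);
        }

        res = PQgetResult(conn);    /* collect the final command status */
        PQclear(res);
    }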
Is there a better way? Here pg_dumping the DB takes over half an hour
(mainly because pg_dump chews all available memory). It would be nicer
if we knew that tarring it up would work also...
> ----- Original Message -----
> From: "mikeo" <[EMAIL PROTECTED]>
> Subject: [GENERAL] disk backups