> Subject: Re: [GENERAL] Out of memory on pg_dump
> Date: Fri, 21 Aug 2009 11:29:48 -0400
> From: chopk...@cra.com
> To: t...@sss.pgh.pa.us
> CC: pgsql-general@postgresql.org
>
"Chris Hopkins" writes:
> Thanks Tom. Next question (and sorry if this is an ignorant one)...how
> would I go about doing that?
See the archives for previous discussions of corrupt-data recovery.
Basically it's divide-and-conquer to find the corrupt rows.
regards, tom lane
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: Friday, August 21, 2009 11:07 AM
To: Chris Hopkins
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Out of memory on pg_dump
"Chris Hopkins" writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on
"Chris Hopkins" writes:
> 2009-08-19 22:35:42 ERROR: out of memory
> 2009-08-19 22:35:42 DETAIL: Failed on request of size 536870912.
> Is there an easy way to give pg_dump more memory?
That isn't pg_dump that's out of memory --- it's a backend-side message.
Unless you've got extremely wide fields in this table, a single request
for 512MB suggests data corruption.
Hi all -
We are using Postgres 8.2.3 as our Confluence backing store, and when
trying to back up the database at night we are seeing this in the logs:
pg_amop_opc_strat_index: 1024 total in 1 blocks; 216 free (0 chunks);
808 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (