On 07/04/2018 12:31 AM, David Rowley wrote:
On 4 July 2018 at 14:43, Andy Colson <a...@squeakycode.net> wrote:
I moved a physical box to a VM, and set its memory to 1Gig.  Everything
runs fine except one backup:


/pub/backup# pg_dump -Fc -U postgres -f wildfire.backup wildfire

pg_dump: Dumping the contents of table "ofrrds" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  out of memory
DETAIL:  Failed on request of size 1073741823.
pg_dump: The command was: COPY public.ofrrds (id, updateddate, bytes) TO stdout;

There will be less memory pressure on the server if pg_dump is run from
another host. When pg_dump runs locally, the 290MB bytea value is
allocated both in the backend process serving pg_dump and in pg_dump
itself. Running the backup remotely moves the latter allocation off the
server.
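
For reference, a remote dump would look something like this (a sketch
only; "dbhost" is a placeholder for whatever the VM is reachable as, and
pg_hba.conf on the VM has to allow the connection):

  pg_dump -h dbhost -p 5432 -Fc -U postgres -f wildfire.backup wildfire

The COPY data then streams over the network, and pg_dump's copy of each
row sits in the client machine's memory rather than the VM's.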

I've been reducing my memory settings:

maintenance_work_mem = 80MB
work_mem = 5MB
shared_buffers = 200MB

You may also get it to work by reducing shared_buffers further.
work_mem won't have any effect, and neither will maintenance_work_mem.
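
As a rough sketch (the value is just an example, not a recommendation),
that would mean something like this in postgresql.conf, followed by a
restart since shared_buffers can't be changed on reload:

  shared_buffers = 128MB    # down from 200MB, leaving more headroom for the large allocation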

Failing that, the suggestions of more RAM and/or swap look good.
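
If you go the swap route, something along these lines works on a typical
Linux guest (assuming a 2GB swap file at /swapfile; adjust size and path
as needed):

  fallocate -l 2G /swapfile
  chmod 600 /swapfile
  mkswap /swapfile
  swapon /swapfile
  # to make it permanent, add to /etc/fstab:
  # /swapfile none swap sw 0 0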


Adding more RAM to the VM is the simplest option. It just seems a waste
because of one backup.

Thanks all.

-Andy
