m.vankl...@vision-development.nl writes:
> When running pg_dump it crashes with the following output:

> pg_dump: dumping contents of table crm_department
> pg_dump: dumping contents of table crm_department_article
> pg_dump: dumping contents of table crm_department_extra_info
> pg_dump: dumping contents of table crm_department_output
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR:  out of memory
> DETAIL:  Failed on request of size 536870912.

That doesn't look like a crash; it looks like an out-of-memory failure.

> We don't think this is corruption of data because when running the same COPY
> command from pgAdmin it also crashes with the same error. But when adding
> WITH BINARY to this command it runs okay. This table contains extremely wide
> columns, around 130 megabytes.

There is a limit to the size of column you can manipulate without
running out of memory, and it is context-dependent, so this behavior is
neither surprising nor a bug.  The reason plain COPY fails while COPY
BINARY doesn't is that the former requires multiple transient copies
of data during I/O conversion, output line formatting, etc.  At a couple
hundred megabytes apiece that'll add up pretty quickly.
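
For illustration (the table and file names below are placeholders, not the
reporter's actual objects), the two forms being compared would look roughly
like this:

    -- plain-text COPY: has to build text representations of each huge value,
    -- plus formatting workspace, all at once
    COPY wide_table TO '/tmp/wide_table.txt';

    -- binary COPY: writes the stored bytes largely as-is, so it needs far
    -- less transient memory per row
    COPY wide_table TO '/tmp/wide_table.dat' WITH BINARY;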

If you don't want to reconsider your data storage layout, a possible
solution is to move to a 64-bit build of Postgres, so that you're not up
against a 4GB total address space limit.  (I'm not sure that 8.4.x
supports 64-bit Windows, but you could definitely get such a build in
more recent PG versions.)  Pending such a fix, you might be able to make
more space available for COPY workspace by reducing shared_buffers and
other settings that control consumption of shared memory space.
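
As a rough sketch (the numbers here are made up and would need to be tuned
to the machine, not copied verbatim), that would mean temporarily editing
postgresql.conf along these lines and restarting the server before re-running
pg_dump:

    shared_buffers = 32MB      # was perhaps several hundred MB
    max_connections = 20       # fewer connection slots also trims shared memory a bit

The point is simply to leave more of the process address space free for the
transient copies COPY needs, not that these particular values are right.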

                        regards, tom lane

