My application uses a bytea column to store some fairly large binary values (hundreds of megabytes).

Recently I've run into a problem as my values start to approach the 1GB limit on field size:

When I write a 955MB byte array from Java into my table via JDBC, the write succeeds and the numbers look about right:

testdb=# select count(*) from problem_table;
 count
-------
     1
(1 row)

testdb=# select pg_size_pretty(pg_total_relation_size('problem_table'));
 pg_size_pretty
----------------
 991 MB
(1 row)
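
For reference, the write path is roughly the following (simplified; the connection details and the "data" column name are illustrative, but my real code is equivalent):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class WriteBlob {
    public static void main(String[] args) throws Exception {
        // ~955MB payload; in the real application this comes from elsewhere
        byte[] data = new byte[955 * 1024 * 1024];
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO problem_table (data) VALUES (?)")) {
            ps.setBytes(1, data);
            ps.executeUpdate();  // completes without error
        }
    }
}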

However, any attempt to read this row back fails:

testdb=# select * from problem_table;
ERROR:  invalid memory alloc request size 2003676411

The same error occurs when reading from JDBC (even using getBinaryStream).
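
In case it matters, the failing read is roughly this (again simplified, names illustrative):

import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ReadBlob {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password");
             Statement st = conn.createStatement();
             // the server reports the same "invalid memory alloc request" error
             ResultSet rs = st.executeQuery("SELECT data FROM problem_table")) {
            rs.next();
            try (InputStream in = rs.getBinaryStream(1)) {
                byte[] buf = new byte[8192];
                while (in.read(buf) != -1) {
                    // consume the stream; we never get this far
                }
            }
        }
    }
}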

Is there some reason why my data can be stored in under 1GB but triggers a 2GB memory allocation when I try to read it back? Is there a setting I can change, or an alternate method of reading, that would get around this?
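
For illustration, the sort of alternate read I have in mind is fetching the value in slices with substr() on the bytea column, along these lines (the chunk size and names are made up, and I don't know whether this actually avoids the big allocation):

import java.io.ByteArrayOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ChunkedRead {
    public static void main(String[] args) throws Exception {
        final int CHUNK = 64 * 1024 * 1024;  // 64MB slices, arbitrary
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/testdb", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT substr(data, ?, ?) FROM problem_table")) {
            int offset = 1;  // bytea offsets are 1-based
            while (true) {
                ps.setInt(1, offset);
                ps.setInt(2, CHUNK);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                    byte[] chunk = rs.getBytes(1);
                    if (chunk == null || chunk.length == 0) {
                        break;  // past the end of the value
                    }
                    out.write(chunk);
                    if (chunk.length < CHUNK) {
                        break;  // last (short) slice
                    }
                    offset += chunk.length;
                }
            }
        }
    }
}

If something along those lines is the recommended approach, that would work for me.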

Thanks,

--
David North, Software Developer, CoreFiling Limited
http://www.corefiling.com
Phone: +44-1865-203192

