On Mon, Sep 27, 2010 at 10:50 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> According to the documentation, the maximum size of a large object is
>> 2 GB, which may be the reason for this behavior.
>
> In principle, since pg_largeobject stores an integer pageno, we could
> support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
> incompatible change in on-disk format.  This'd require converting a lot
> of the internal LO access logic to track positions as int64 not int32,
> but now that we require platforms to have working int64 that's no big
> drawback.  The main practical problem is that the existing lo_seek and
> lo_tell APIs use int32 positions.  I'm not sure if there's any cleaner
> way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
> and have the existing ones throw an error if asked to deal with positions
> past 2^31.
>
> In the particular case here, I think that lo_write may actually be
> writing past the 2GB boundary, while the coding in lo_read is a bit
> different and stops at the 2GB "limit".

Ouch.  Letting people write data to a place they can't read it back
from seems double-plus ungood.
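
Just to spell out the arithmetic behind the 4TB figure: with the
default BLCKSZ of 8192, LOBLKSIZE is BLCKSZ/4 = 2048, and 2048
bytes/page * 2^31 pages = 2^42 bytes = 4TB.  And here's a rough
sketch of the seek/tell shape you're describing -- purely
illustrative names, not the real backend API:

/* Illustrative sketch only: track positions as int64 internally,
 * expose a 64-bit tell, and make the legacy int32 entry point fail
 * loudly instead of silently truncating past 2^31. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct LODesc
{
    int64_t     pos;            /* stand-in for the internal LO state */
} LODesc;

static int64_t
lo_tell64_sketch(LODesc *desc)
{
    return desc->pos;           /* full 64-bit position */
}

static int
lo_tell_sketch(LODesc *desc)
{
    if (desc->pos > INT32_MAX)
    {
        fprintf(stderr, "position is past 2^31-1; use lo_tell64\n");
        exit(EXIT_FAILURE);
    }
    return (int) desc->pos;
}

int
main(void)
{
    LODesc      d = {(int64_t) 3 << 30};    /* 3GB, past the int32 range */

    printf("64-bit tell: %lld\n", (long long) lo_tell64_sketch(&d));
    lo_tell_sketch(&d);         /* reports the error and exits */
    return 0;
}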
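
For anyone who wants to reproduce the write/read asymmetry, something
along these lines should show it (an untested sketch: the conninfo is
a placeholder, and actually running it needs 2GB+ of storage and a
fair amount of time):

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>

int
main(void)
{
    char        buf[8192];
    PGconn     *conn = PQconnectdb("dbname=test");  /* placeholder */
    Oid         oid;
    int         fd;
    int         n;
    long long   i,
                chunks,
                total = 0;

    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    memset(buf, 'x', sizeof(buf));
    PQclear(PQexec(conn, "BEGIN")); /* LO calls need a transaction */

    oid = lo_creat(conn, INV_READ | INV_WRITE);
    fd = lo_open(conn, oid, INV_WRITE);

    /* Write 2GB plus one extra chunk; per the report, this succeeds. */
    chunks = (2LL << 30) / sizeof(buf) + 1;
    for (i = 0; i < chunks; i++)
    {
        if (lo_write(conn, fd, buf, sizeof(buf)) < 0)
        {
            fprintf(stderr, "lo_write failed at chunk %lld\n", i);
            break;
        }
    }
    lo_close(conn, fd);

    /* Read it all back; per the report, reads stop at the 2GB
     * boundary, stranding the last chunk. */
    fd = lo_open(conn, oid, INV_READ);
    while ((n = lo_read(conn, fd, buf, sizeof(buf))) > 0)
        total += n;
    printf("wrote %lld bytes, read back %lld bytes\n",
           chunks * (long long) sizeof(buf), total);

    lo_close(conn, fd);
    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}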

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company
