Andrew Dunstan <and...@dunslane.net> writes:
> On 03/26/2012 01:34 PM, Tom Lane wrote:
>> Hm.  The test case is just a straight pg_restore of lots and lots of LOs?
>> What pg_dump version was the dump made with?
> 8.4.8, same as the target.  We get the same issue whether we restore
> directly to the database from pg_restore or via a text dump.

I believe I see the issue: when creating/loading LOs, we first do a
lo_create (which in 8.4 makes a "page zero" tuple in pg_largeobject
containing zero bytes of data) and then an lo_write, which does a
heap_update to overwrite that tuple with data.  The update happens at a
later command of the same transaction, so the original tuple has to
receive a combo CID.  Net result: we accumulate one new combo CID per
large object loaded in the same transaction.

You can reproduce this without any pg_dump involvement at all, using
something like

    create table mylos (id oid);
    insert into mylos
      select lo_import('/tmp/junk') from generate_series(1,1000000);

The problem is gone in 9.0 and up, because now we use a
pg_largeobject_metadata entry instead of a pg_largeobject row to flag
the existence of an empty large object.

I don't see any very practical backend fix for the problem in 8.x.
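To spell out where the combo CID comes from, here is a minimal sketch of
the per-object command sequence on 8.4 (the OID and data are just
placeholders; 131072 is INV_WRITE):

    begin;
    select lo_create(16385);        -- inserts the zero-byte "page zero" tuple
    select lo_open(16385, 131072);  -- open for writing; returns descriptor 0
    select lowrite(0, 'xyzzy');     -- heap_update of that tuple at a later
                                    --   command => one new combo CID
    select lo_close(0);
    -- each additional LO loaded before commit adds another combo CID
    commit;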
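For comparison, on 9.0 or later an empty large object leaves nothing
behind in pg_largeobject at all, so the first write is a plain insert
and needs no combo CID (again, 16385 is just an example OID):

    select lo_create(0);                      -- returns a fresh OID, say 16385
    select count(*) from pg_largeobject
      where loid = 16385;                     -- 0: no page-zero tuple to update
    select count(*) from pg_largeobject_metadata
      where oid = 16385;                      -- 1: the existence flag lives here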
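Purely as a client-side mitigation (not a backend fix), the damage can
be bounded by loading the objects across several smaller transactions,
since the combo CID array only lives until commit.  For instance, the
test case above could be split into batches:

    create table mylos (id oid);
    -- under autocommit, each insert is its own transaction, so no more
    -- than 100000 combo CIDs accumulate at any one time:
    insert into mylos select lo_import('/tmp/junk') from generate_series(1,100000);
    insert into mylos select lo_import('/tmp/junk') from generate_series(1,100000);
    -- ... and so on for the remaining batches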

            regards, tom lane