Tom Lane wrote:
> Gavin Sherry <[EMAIL PROTECTED]> writes:
>> There's an apparently arbitrary limit of 10,000,000 bytes in twophase.c
>> on the size of a two phase commit file. I can't see why this limit
>> exists.
>
> The comment seems perfectly clear about why the limit exists:
>
>      * Check file length.  We can determine a lower bound pretty easily. We
>      * set an upper bound mainly to avoid palloc() failure on a corrupt file.
>
> although certainly the specific value has been picked out of the air.
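
(For reference, the pattern being discussed is just a bounds test on the
state file's length before the whole file is pulled into memory. Below is a
minimal standalone sketch of that pattern; the function name, constants and
exact bounds are illustrative, not the actual twophase.c code.)

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

/* Illustrative bounds; the real numbers live in twophase.c */
#define MIN_STATE_FILE_SIZE   64          /* roughly: header + one record + CRC */
#define MAX_STATE_FILE_SIZE   10000000    /* the upper bound "picked out of the air" */

/*
 * Read a state file into memory, rejecting lengths that are clearly
 * impossible.  The upper bound exists only so that a corrupt or garbage
 * length can't turn into an absurdly large allocation.
 */
static char *
read_state_file(const char *path, long *len_out)
{
    struct stat st;
    FILE       *fp;
    char       *buf;

    if (stat(path, &st) != 0)
        return NULL;
    if (st.st_size < MIN_STATE_FILE_SIZE || st.st_size > MAX_STATE_FILE_SIZE)
        return NULL;                    /* deemed corrupt: refuse to read it */

    if ((fp = fopen(path, "rb")) == NULL)
        return NULL;
    buf = malloc(st.st_size);
    if (buf == NULL || fread(buf, 1, st.st_size, fp) != (size_t) st.st_size)
    {
        free(buf);
        fclose(fp);
        return NULL;
    }
    fclose(fp);
    *len_out = (long) st.st_size;
    return buf;
}
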

I'm surprised the twophase file grows that big. If you exceed 10 MB by
dropping 25000 objects, that's 400 bytes per object. A TwoPhaseLockRecord
is only 20-24 bytes, plus an 8-byte TwoPhaseRecordOnDisk header, so what's
taking up so much space?
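
(To make the back-of-the-envelope arithmetic explicit: the sizes below are
the approximate figures quoted above, not values measured from the headers.)

#include <stdio.h>

int
main(void)
{
    /* Approximate per-record sizes from the discussion above */
    const int lock_record = 24;         /* TwoPhaseLockRecord, upper estimate */
    const int record_header = 8;        /* TwoPhaseRecordOnDisk header */
    const int limit_bytes = 10000000;
    const int dropped_objects = 25000;

    printf("bytes per object at the limit:  %d\n", limit_bytes / dropped_objects); /* 400 */
    printf("bytes accounted for per object: %d\n", lock_record + record_header);   /* 32  */
    /* The remaining ~370 bytes per object must be coming from somewhere else. */
    return 0;
}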

[tests...] It looks like it's the shared cache invalidation messages that
make up the rest of the bulk. They could probably be stored in a denser
format; we currently store each invalidation message as a separate
twophase record. I don't think I want to spend time on that myself, but
if someone cares enough to write a patch I'll be happy to review it.
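
(A rough sketch of the denser layout being suggested: batch the pending
invalidation messages into a single two-phase record prefixed by a count,
rather than paying a record header for each message. The type and function
names here are hypothetical, not taken from the PostgreSQL sources.)

#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for a fixed-size shared invalidation message */
typedef struct InvalMessage
{
    int8_t  id;
    char    payload[15];
} InvalMessage;

/*
 * Serialize all pending invalidation messages as one blob, prefixed by a
 * count, so the whole batch can be attached to the state file as a single
 * two-phase record instead of one record (and one 8-byte record header)
 * per message.
 */
static size_t
pack_inval_messages(const InvalMessage *msgs, uint32_t nmsgs,
                    char *dest, size_t destlen)
{
    size_t needed = sizeof(uint32_t) + (size_t) nmsgs * sizeof(InvalMessage);

    if (needed > destlen)
        return 0;                       /* caller must supply a larger buffer */

    memcpy(dest, &nmsgs, sizeof(uint32_t));
    memcpy(dest + sizeof(uint32_t), msgs, (size_t) nmsgs * sizeof(InvalMessage));
    return needed;
}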

> Perhaps it'd be better to use malloc() than palloc(), so that we'd not
> lose control on out-of-memory, and then deem the file "too big" only
> if we couldn't malloc the space.

That doesn't seem much better to me than just letting the palloc fail.
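
(For contrast, the approach Tom floats above would look roughly like this:
no fixed cap at all, just an allocation sized from the file, with allocation
failure reported as the file being too big. A hedged sketch using plain
malloc(); the server itself would report the error with ereport().)

#include <stdio.h>
#include <stdlib.h>

/*
 * Alternative to a fixed cap: allocate exactly what the file claims to
 * need, and treat allocation failure as "file too big".  A corrupt length
 * still fails, it just fails in the allocator rather than against an
 * arbitrary constant.
 */
static char *
alloc_state_buffer(long claimed_size)
{
    char *buf;

    if (claimed_size <= 0)
        return NULL;
    buf = malloc((size_t) claimed_size);
    if (buf == NULL)
        fprintf(stderr, "could not allocate %ld bytes for two-phase state file\n",
                claimed_size);
    return buf;                         /* NULL means "deem the file too big" */
}
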

If we're going to check the file length, we should definitely check it
when we write the file, so that we fail at PREPARE time rather than at
COMMIT time.

I'll write up a patch along the lines of:
1. Increase the limit to, say, 100 MB.
2. Check the file length at PREPARE time.
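
(A sketch of what (2) could look like: keep a running total while the state
file is assembled and error out as soon as it exceeds the cap, so an
oversized transaction fails at PREPARE TRANSACTION time, while the client
can still do something about it. The names and error handling are
illustrative; the real code would use ereport(ERROR, ...).)

#include <stdio.h>
#include <stdlib.h>

/* Illustrative new cap, per (1) above */
#define MAX_STATE_FILE_SIZE  (100 * 1024 * 1024)

/* Hypothetical running total kept while the state file is being built */
static size_t state_file_bytes = 0;

/*
 * Called for every record appended to the state file.  Checking the
 * accumulated size here makes an oversized transaction fail at PREPARE
 * time instead of when the prepared transaction is later committed.
 */
static void
append_twophase_record(const void *data, size_t len)
{
    state_file_bytes += len;
    if (state_file_bytes > MAX_STATE_FILE_SIZE)
    {
        fprintf(stderr, "two-phase state file would exceed %d bytes\n",
                MAX_STATE_FILE_SIZE);
        exit(EXIT_FAILURE);             /* stand-in for ereport(ERROR, ...) */
    }
    /* ... append "data" to the state-file buffer here ... */
    (void) data;
}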

> Or we could try to fix things so that the file doesn't have to be all in
> memory, but that seems pretty invasive.

Yep. And definitely not back-patchable.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
