Magnus Hagander <mag...@hagander.net> writes:
> On Mon, Mar 8, 2021 at 5:33 PM Tom Lane <t...@sss.pgh.pa.us> wrote:
>> It does seem that --single-transaction is a better idea than fiddling with
>> the transaction wraparound parameters, since the latter is just going to
>> put off the onset of trouble.  However, we'd have to do something about
>> the lock consumption.  Would it be sane to have the backend not bother to
>> take any locks in binary-upgrade mode?

> I believe the problem occurs when writing them rather than when
> reading them, and I don't think we have a binary upgrade mode there.

You're confusing pg_dump's --binary-upgrade switch (indeed applied on
the dumping side) with the backend's -b switch (IsBinaryUpgrade,
applied on the restoring side).
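
For reference, the two look roughly like this (other switches elided):

    pg_dump --binary-upgrade ...    # dumping side
    postgres -b ...                 # restoring side; sets IsBinaryUpgrade

pg_upgrade itself passes -b when it starts the servers.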

> We could invent one of course. Another option might be to exclusively
> lock pg_largeobject, and just say that if you do that, we don't have
> to lock the individual objects (ever)?

What was in the back of my mind is that we've sometimes seen complaints
about too many locks being needed to dump or restore a database with
$MANY tables; the large-object problem is really just a special case of
that.
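
The symptom is the same in both cases: the shared lock table fills up
and you get

    ERROR:  out of shared memory
    HINT:  You might need to increase max_locks_per_transaction.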

The answer up to now has been "raise max_locks_per_transaction enough
so you don't see the failure".  Having now consumed a little more
caffeine, I remember that that works in pg_upgrade scenarios too,
since the user can fiddle with the target cluster's postgresql.conf
before starting pg_upgrade.
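
Concretely, that means something like this in the target cluster's
postgresql.conf before running pg_upgrade (the value is purely for
illustration; it has to be scaled to the number of objects restored in
one transaction):

    max_locks_per_transaction = 4096    # default is 64

Note the shared lock table is sized as max_locks_per_transaction times
(max_connections + max_prepared_transactions), so a single session can
use more than max_locks_per_transaction slots as long as the total
fits.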

So it seems like the path of least resistance is:

(a) make pg_upgrade use --single-transaction when calling pg_restore,
as sketched below

(b) document (better) how to get around too-many-locks failures.
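
For (a), the change would be along these lines (untested, and the
bracketed part stands in for whatever switches pg_upgrade already
passes):

    pg_restore --single-transaction [existing switches] dumpfile

That wraps the whole restore in one transaction, so it consumes only
one XID no matter how many large objects there are, at the cost of
holding a lock on every restored object until commit, which is exactly
why (b) matters.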

                        regards, tom lane

