Hi,

On 2021-04-23 19:28:27 -0500, Justin Pryzby wrote:
> This (combination of) thread(s) seems relevant.
> 
> Subject: pg_upgrade failing for 200+ million Large Objects
> https://www.postgresql.org/message-id/flat/12601596dbbc4c01b86b4ac4d2bd4d48%40EX13D05UWC001.ant.amazon.com
> https://www.postgresql.org/message-id/flat/a9f9376f1c3343a6bb319dce294e20ac%40EX13D05UWC001.ant.amazon.com
> https://www.postgresql.org/message-id/flat/cc089cc3-fc43-9904-fdba-d830d8222145%40enterprisedb.com#3eec85391c6076a4913e96a86fece75e

Huh. Thanks for digging these up.


> > Allows the user to provide a constant via pg_upgrade command-line, that
> > overrides the 2 billion constant in pg_resetxlog [1] thereby increasing the
> > (window of) Transaction IDs available for pg_upgrade to complete.

That seems entirely the wrong approach to me, buying further into the broken
idea of inventing random wrong values for oldestXid.
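
For anyone who hasn't looked at it: the "2 billion constant" lives in
pg_resetxlog's -x handling, which fabricates oldestXid from the new nextXid
instead of taking it from the old cluster. From memory it is roughly the
following (a simplified, self-contained sketch, not the verbatim source):

    #include <stdint.h>

    typedef uint32_t TransactionId;
    #define FirstNormalTransactionId ((TransactionId) 3)

    /* roughly what pg_resetxlog -x does to derive oldestXid */
    static TransactionId
    fabricate_oldest_xid(TransactionId set_xid)
    {
        /*
         * oldestXid is invented as "2 billion before the new nextXid"
         * (the maximum allowed autovacuum_freeze_max_age), with no input
         * from the source cluster at all.
         */
        TransactionId oldest = set_xid - 2000000000;

        if (oldest < FirstNormalTransactionId)
            oldest += FirstNormalTransactionId;
        return oldest;
    }

The proposal above just makes that 2000000000 configurable, which doesn't
change the fundamental problem that the value is made up.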

We drive important things like the emergency xid limits off oldestXid. On
databases with tables that are older than ~147 million xids (i.e. not even
affected by the default autovacuum_freeze_max_age) the current constant leads
to setting oldestXid to a value *in the future*/wrapped around. Any different
constant (or pg_upgrade parameter) will do that too in other scenarios.
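
(Where the ~147 million comes from, for reference: xids are compared modulo
2^32 with at most 2^31 of usable distance, and the wraparound limit is
derived from oldestXid as roughly oldestXid + 2^31. With oldestXid invented
as nextXid - 2,000,000,000, that limit lands only 2^31 - 2,000,000,000 =
147,483,648 xids ahead of nextXid, regardless of what the source cluster
actually contained. A minimal sketch of that arithmetic, assuming the usual
(int32) difference style of xid comparison:)

    #include <stdint.h>
    #include <stdio.h>

    /* "a precedes b", modulo 2^32, in the transam.c style */
    static int
    xid_precedes(uint32_t a, uint32_t b)
    {
        return (int32_t) (a - b) < 0;
    }

    int
    main(void)
    {
        uint32_t next_xid = 3000000000u;               /* arbitrary example */
        uint32_t fake_oldest = next_xid - 2000000000u; /* fabricated oldestXid */
        uint32_t wrap_limit = fake_oldest + (UINT32_C(1) << 31);

        /* prints 147483648: the headroom left after the upgrade */
        printf("headroom = %u\n", wrap_limit - next_xid);

        /* nextXid is (barely) still before the fabricated wrap limit */
        printf("next precedes wrap limit: %d\n",
               xid_precedes(next_xid, wrap_limit));
        return 0;
    }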

As far as I can tell there is precisely *no* correct behaviour here other than
exactly copying the oldestXid limit from the source database.
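
To make that concrete, "copying" means reading the old cluster's
"Latest checkpoint's oldestXID" (as printed by pg_controldata) and stamping
exactly that value onto the new cluster, rather than deriving anything from
nextXid. Something along these lines; note that the -u switch on pg_resetxlog
is hypothetical here, an option to actually set oldestXid would need to be
added:

    #include <stdio.h>

    int
    main(int argc, char **argv)
    {
        if (argc != 2)
        {
            fprintf(stderr, "usage: %s OLD_DATADIR\n", argv[0]);
            return 1;
        }

        char cmd[1024];
        snprintf(cmd, sizeof(cmd), "pg_controldata \"%s\"", argv[1]);

        FILE *fp = popen(cmd, "r");
        if (fp == NULL)
        {
            perror("popen");
            return 1;
        }

        unsigned int oldest_xid = 0;
        char line[512];
        while (fgets(line, sizeof(line), fp) != NULL)
        {
            /* e.g. "Latest checkpoint's oldestXID:          726" */
            if (sscanf(line, "Latest checkpoint's oldestXID: %u", &oldest_xid) == 1)
                break;
        }
        pclose(fp);

        if (oldest_xid == 0)
        {
            fprintf(stderr, "oldestXID not found in pg_controldata output\n");
            return 1;
        }

        /* hypothetical -u option; pg_upgrade would do the equivalent itself */
        printf("pg_resetxlog -u %u <new datadir>\n", oldest_xid);
        return 0;
    }

(pg_upgrade already parses pg_controldata output for other fields, so most of
the plumbing exists; the missing piece is a way to set oldestXid rather than
invent it.)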

Greetings,

Andres Freund

