>>  Is this the regular Postgres log or the pg_upgrade log which should be
something like pg_upgrade_server.log?

This is the pg_upgrade_dump_16400.log.

>>  How did you get into the 10 cluster to report on the database OID's and
names?

After the pg_upgrade failed I was able to start both clusters, so I
connected to the new 10.4 cluster and ran the query.

>>  Which database has the large objects?

bof (OID=16400). It is also effectively the only database that matters
here. The other one, sslentry, contains only a couple of tables and a dozen
records.
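In case it helps narrow things down, here is a sketch of how the large objects could be confirmed per database (connect to each database in turn and count the entries in pg_largeobject_metadata; the exact counts below are not from my cluster):

```sql
-- Run while connected to each database in turn;
-- counts the large objects defined in that database.
SELECT count(*) FROM pg_largeobject_metadata;
```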

>>  Did you check this view to confirm?

Yes, I did:

select * from pg_prepared_xacts;
 transaction | gid | prepared | owner | database
-------------+-----+----------+-------+----------
(0 rows)
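Since the pg_restore error mentions wraparound protection, I also checked how close each database's datfrozenxid is to the wraparound limit (a sketch; the exact output will differ per cluster):

```sql
-- Databases closest to transaction ID wraparound first; an age
-- approaching ~2 billion means the database needs to be vacuumed.
SELECT datname, oid, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY age(datfrozenxid) DESC;
```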


2018-06-11 3:15 GMT+03:00 Adrian Klaver <adrian.kla...@aklaver.com>:

> On 06/10/2018 02:45 PM, Alexander Shutyaev wrote:
>
> Comments inline.
>
> The error log is like this. Here's its tail:
>>
>
> Is this the regular Postgres log or the pg_upgrade log which should be
> something like pg_upgrade_server.log?
>
>
> pg_restore: [archiver (db)] could not execute query: ERROR:  database is
>> not accepting commands to avoid wraparound data loss in database with OID 0
>> HINT:  Stop the postmaster and vacuum that database in single-user mode.
>>
>
> How did you get into the 10 cluster to report on the database OID's and
> names?
>
> You might also need to commit or roll back old prepared transactions.
>>      Command was: ALTER LARGE OBJECT 1740737402 OWNER TO bof_user;
>>
>> Before that there are a lot of similar messages - the only things
>> changing are the "executing BLOB nnn" number and the "must be vacuumed
>> within nnn transactions" number.
>>
>>
> Which database has the large objects?
>
> As for the prepared transactions - no, I don't have them, our application
>> doesn't use this functionality.
>>
>
> Did you check this view to confirm?:
>
> https://www.postgresql.org/docs/10/static/view-pg-prepared-xacts.html
>
> Just trying to eliminate possibilities.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
