Hello again, I've performed another test - I've migrated to the new cluster
using dump/restore (pg_dumpall | psql). It went well, although it took 6 days,
whereas pg_upgrade usually took a night.
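For the next run I plan to apply the usual bulk-load tuning on the destination
cluster before starting the restore. A sketch of what I have in mind is below -
the values are just guesses for my hardware, not recommendations, and I would
revert everything once the restore is done. I'm hoping synchronous_commit = off
in particular helps, given how many separate transactions the restore seems to
involve:

ALTER SYSTEM SET maintenance_work_mem = '2GB';  -- faster index builds after the data load
ALTER SYSTEM SET max_wal_size = '20GB';         -- fewer checkpoints while the data streams in
ALTER SYSTEM SET checkpoint_timeout = '30min';  -- same goal: less checkpoint I/O during the load
ALTER SYSTEM SET synchronous_commit = off;      -- don't wait for a WAL flush on every commit
SELECT pg_reload_conf();                        -- all of the above can be changed without a restart

If there are better knobs to turn (or if none of this matters next to the
per-object transactions), I'd be glad to know.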
Is there any hope the issue with pg_upgrade can be resolved? If not, could you
give me some hints as to how I can decrease the time needed for
pg_dumpall | psql (beyond the settings sketched above)? Thanks in advance!

2018-06-13 0:11 GMT+03:00 Alexander Shutyaev <shuty...@gmail.com>:

> Back again,
>
> >> Alexander, could you hack things up so autovacuum logging is enabled
> >> (log_autovacuum_min_duration=0), and see whether it's triggered?
>
> I've changed this config setting in both the 9.6 and 10.4 postgresql.conf,
> then I've run pg_upgrade once more.
>
> However, I'm not sure how I can see whether autovacuum was triggered or
> not. I've tried grepping the logs for lines containing both 'vacuum' and
> 'auto' (case-insensitive) - there were none. If you can be more specific,
> I can look for anything else.
>
> I've googled how one can see that autovacuum is working, and found this
> query, which I ran on the 10.4 cluster:
>
> select count(*) from pg_stat_all_tables where last_autovacuum is not null;
>  count
> -------
>      0
> (1 row)
>
> So it seems autovacuum is indeed not working, just as you proposed.
>
> If I correctly summarized all your responses, the problem is that:
>
> 1) pg_restore (as part of pg_upgrade) inserts each large object in a
> different transaction
>
> That seems true to me given the log output - each time an object is
> inserted, the wraparound warning decrements by 1.
>
> 2) autovacuum doesn't work while the database is being restored
>
> That also seems true (see above).
>
> 3) the number of large objects is so big that, as they are restored,
> transaction wraparound occurs
>
> Here's the number of large objects taken from the 9.6 cluster (spaces
> added manually for clarity):
>
> select count(*) from pg_largeobject_metadata;
>     count
> -------------
>  133 635 871
> (1 row)
>
> If I've googled correctly, the transaction number is a 32-bit integer, so
> its limit is 2 147 483 647, which is a lot more. I guess I'm missing
> something.
>
> This is just my attempt to summarize our progress so far.
>
> I'm further open to your suggestions.
>
> 2018-06-12 14:32 GMT+03:00 Daniel Verite <dan...@manitou-mail.org>:
>
>> Andres Freund wrote:
>>
>> > I'm not entirely clear why pg_restore appears to use a separate
>> > transaction for each large object, surely exacerbating the problem.
>>
>> To make sure that per-object locks don't fill up the shared
>> lock table? There might be hundreds of thousands of large objects.
>> If it had to restore N objects per transaction, would it know
>> how to compute N that is large enough to be effective
>> and small enough not to exhaust the shared table?
>>
>> Best regards,
>> --
>> Daniel Vérité
>> PostgreSQL-powered mailer: http://www.manitou-mail.org
>> Twitter: @DanielVerite
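P.S. Re-reading Daniel's point about the shared lock table: if I understand the
documentation correctly, its capacity is roughly
max_locks_per_transaction * (max_connections + max_prepared_transactions),
which with default settings is only 64 * (100 + 0) = 6 400 entries - nowhere
near my 133 million large objects, so restoring them all in a single
transaction would indeed overflow it. A query along these lines (my own sketch,
reading the values from the current settings) should show the number for a
given cluster:

select current_setting('max_locks_per_transaction')::int
     * ( current_setting('max_connections')::int
       + current_setting('max_prepared_transactions')::int ) as lock_table_capacity;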