Bruce,

* Bruce Momjian (br...@momjian.us) wrote:
> As part of PGConf.Asia 2017 in Tokyo, we had an unconference topic about
> zero-downtime upgrades.  After the usual discussion of using logical
> replication, Slony, and perhaps having the server be able to read old
> and new system catalogs, we discussed speeding up pg_upgrade.

Sounds familiar.

> There are clusters that take a long time to dump the schema from the old
> cluster and recreate it in the new cluster.  One idea of speeding up
> pg_upgrade would be to allow pg_upgrade to be run in two stages:
> 
> 1.  prevent system catalog changes while the old cluster is running, and
> dump the old cluster's schema and restore it in the new cluster
> 
> 2.  shut down the old cluster and copy/link the data files
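
Just to make sure I'm reading the proposal right, here's roughly how I'd
picture those two stages if they were driven by a wrapper script today (a
Python sketch for illustration only -- the paths, ports, the catalog-freeze
step, and the --skip-schema-restore flag are all placeholders/hypothetical,
since pg_upgrade today won't accept a new cluster that already has the
schema loaded):

    #!/usr/bin/env python3
    """Sketch of the proposed two-stage flow; nothing here exists today as-is."""
    import subprocess

    OLD_BIN = "/usr/lib/postgresql/9.6/bin"
    NEW_BIN = "/usr/lib/postgresql/10/bin"
    OLD_DATA, NEW_DATA = "/srv/pg96/data", "/srv/pg10/data"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # --- Stage 1: old cluster still up and serving traffic ---------------
    # (hypothetical) flip the proposed server setting so no further catalog
    # changes can happen on the old cluster
    # run([f"{OLD_BIN}/psql", "-d", "postgres", "-c", "ALTER SYSTEM SET ..."])

    # dump only the schema and load it into the freshly initdb'd new cluster
    run([f"{OLD_BIN}/pg_dumpall", "--schema-only", "-p", "5432",
         "-f", "/tmp/schema.sql"])
    run([f"{NEW_BIN}/psql", "-p", "5433", "-d", "postgres",
         "-f", "/tmp/schema.sql"])

    # --- Stage 2: the downtime window ------------------------------------
    run([f"{OLD_BIN}/pg_ctl", "-D", OLD_DATA, "stop", "-m", "fast"])
    run([f"{NEW_BIN}/pg_ctl", "-D", NEW_DATA, "stop", "-m", "fast"])
    # hypothetical flag: pg_upgrade would skip its own schema transfer and
    # only move/link the user data files
    run([f"{NEW_BIN}/pg_upgrade", "--link", "--skip-schema-restore",
         "-b", OLD_BIN, "-B", NEW_BIN, "-d", OLD_DATA, "-D", NEW_DATA])

The win, as I understand it, is that everything under "Stage 1" happens
while the old cluster is still answering queries, leaving only the
file-level work in the downtime window.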

Perhaps a bit more complicated, but couldn't we copy/link while the
old cluster is online and in backup mode, finish backup mode, shut down
the old cluster, play forward the WAL to catch any relation extents
being added (or similar), and then flip to the new PG version?
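
Roughly this sequence, sticking with the same kind of sketch (rsync
invocations, paths, and ports are placeholders, and I'm hand-waving the
details of bringing the copy to consistency before pg_upgrade is run
against it):

    #!/usr/bin/env python3
    """Sketch of the copy-while-online variant; placeholders throughout."""
    import subprocess

    OLD_BIN = "/usr/lib/postgresql/9.6/bin"
    OLD_DATA, COPY_DATA = "/srv/pg96/data", "/srv/pg96-copy/data"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. bulk copy while the old cluster is still online, under backup mode
    #    (9.6-style exclusive backup, so backup_label lands in the data dir)
    run([f"{OLD_BIN}/psql", "-d", "postgres", "-c",
         "SELECT pg_start_backup('upgrade', true)"])
    # in practice the rsync would need to tolerate files vanishing mid-copy
    run(["rsync", "-a", f"{OLD_DATA}/", f"{COPY_DATA}/"])
    run([f"{OLD_BIN}/psql", "-d", "postgres", "-c", "SELECT pg_stop_backup()"])

    # 2. downtime window: stop the old cluster, then grab the WAL generated
    #    since the bulk copy so it can be played forward on the copy
    run([f"{OLD_BIN}/pg_ctl", "-D", OLD_DATA, "stop", "-m", "fast"])
    run(["rsync", "-a", f"{OLD_DATA}/pg_xlog/", f"{COPY_DATA}/pg_xlog/"])

    # 3. start the copy once with the *old* binaries so recovery replays
    #    that WAL (new relation extents and all) and it shuts down clean,
    #    then run pg_upgrade --link against the copy as usual
    run([f"{OLD_BIN}/pg_ctl", "-D", COPY_DATA, "-w", "start"])
    run([f"{OLD_BIN}/pg_ctl", "-D", COPY_DATA, "stop", "-m", "fast"])

The point of copying only the WAL in step 2 is to keep the downtime window
down to "stop, copy WAL, replay, upgrade" rather than a full re-copy of the
data files.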

> My question is whether the schema dump/restore is time-consuming enough
> to warrant this optional more complex API, and whether people would
> support adding a server setting that prevented all system table changes?

When you say 'system table changes', you're referring to basically all
DDL, right?  Just want to clarify, since there might be some confusion
between the terminology you're using here and allow_system_table_mods.

Would we need to have autovacuum shut down too..?

The other concern is whether there are changes made to the catalogs by
non-DDL activity that would need to be addressed too (logical
replication?); nothing definite springs to mind off-hand for me, but
perhaps others will think of things.

Thanks!

Stephen
