On Fri, Jan 24, 2025, at 8:12 PM, Masahiko Sawada wrote:
> Here are some comments on v2 patch:
>
> ---
>      /* Report this after the initial starting message for consistency. */
> -    if (max_replication_slots == 0)
> +    if (max_replication_origins == 0)
>          ereport(ERROR,
>                  (errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
>                   errmsg("cannot start logical replication workers when \"max_replication_slots\"=0")));
>
> Need to update the error message too.
Good catch! > --- > + {"max_replication_origins", > + PGC_POSTMASTER, > + REPLICATION_SUBSCRIBERS, > + gettext_noop("Sets the maximum number of > simultaneously defined replication origins."), > + NULL > + }, > > I think the description is not accurate; this GUC controls the maximum > number of simultaneous replication origins that can be setup. Instead of "defined" I use "configured". It seems closer to "setup". > --- > Given that max_replication_origins doesn't control the maximum number > of replication origins that can be defined, probably we need to find a > better name. As Kuroda-san already mentioned some proposed names, > max_tracked_replication_origins or max_replication_origin_states seem > reasonable to me. The max_replication_origins name is not accurate. I chose it because (a) it is a runtime limit, (b) it is short and (c) a description can provide the exact meaning. I think the proposed names don't still reflect the exact meaning. The "tracked" word provides the meaning that the replication origin is tracked but in this case it should mean "setup". An existing replication origin that is not in use is tracked although its information is not available in the pg_replication_origin_status. The "states" word doesn't make sense in this context. Do you mean "status" (same as the view name)? Under reflection, an accurate name is max_replication_origin_session_setup. A counter argument is that it is a long name (top-5 length). postgres=# select n, length(n) from (values('max_replication_origins'), ('max_tracked_replication_origins'),('max_replication_origin_states'), ('max_replication_origin_session_setup')) as gucs(n); n | length --------------------------------------+-------- max_replication_origins | 23 max_tracked_replication_origins | 31 max_replication_origin_states | 29 max_replication_origin_session_setup | 36 (4 rows) postgres=# select name, length(name) from pg_settings order by 2 desc limit 15; name | length ---------------------------------------------+-------- max_parallel_apply_workers_per_subscription | 43 ssl_passphrase_command_supports_reload | 38 autovacuum_vacuum_insert_scale_factor | 37 autovacuum_multixact_freeze_max_age | 35 debug_logical_replication_streaming | 35 idle_in_transaction_session_timeout | 35 autovacuum_vacuum_insert_threshold | 34 log_parameter_max_length_on_error | 33 vacuum_multixact_freeze_table_age | 33 max_sync_workers_per_subscription | 33 client_connection_check_interval | 32 max_parallel_maintenance_workers | 32 shared_memory_size_in_huge_pages | 32 restrict_nonsystem_relation_kind | 32 autovacuum_analyze_scale_factor | 31 (15 rows) > --- > +#include "utils/guc_hooks.h" > > I think #include'ing guc.h would be more appropriate. Fixed. I also updated the pg_createsubscriber documentation that refers to max_replication_slots. Since we don't have an agreement about the name, I still kept max_replication_origins. -- Euler Taveira EDB https://www.enterprisedb.com/
From ece93307b4085595a3610f0d53d28df1d3b9a76f Mon Sep 17 00:00:00 2001
From: Euler Taveira <eu...@eulerto.com>
Date: Tue, 3 Sep 2024 12:10:20 -0300
Subject: [PATCH v3] Separate GUC for replication origins

This feature already exists but it is provided by an existing GUC:
max_replication_slots. The new GUC (max_replication_origins) defines the
maximum number of replication origins that can be configured simultaneously.
The max_replication_slots was used for this purpose but it is confusing (when
you are learning about logical replication) and introduces a limitation (you
cannot have a small number of replication slots and a high number of
subscriptions).

For backward compatibility, the default is -1, indicating that the value of
max_replication_slots is used instead.
---
 doc/src/sgml/config.sgml                      | 27 ++----
 doc/src/sgml/logical-replication.sgml         |  6 +-
 doc/src/sgml/ref/pg_createsubscriber.sgml     |  2 +-
 src/backend/replication/logical/launcher.c    |  6 +-
 src/backend/replication/logical/origin.c      | 91 ++++++++++++-------
 src/backend/utils/misc/guc_tables.c           | 12 +++
 src/backend/utils/misc/postgresql.conf.sample |  3 +
 src/bin/pg_basebackup/pg_createsubscriber.c   | 18 ++--
 .../t/040_pg_createsubscriber.pl              |  4 +-
 src/bin/pg_upgrade/check.c                    | 14 +--
 src/bin/pg_upgrade/t/004_subscription.pl      | 14 +--
 src/include/replication/origin.h              |  3 +
 12 files changed, 113 insertions(+), 87 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index a782f109982..0db28d81538 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -4350,13 +4350,6 @@ restore_command = 'copy "C:\\server\\archivedir\\%f" "%p"'  # Windows
         to <literal>replica</literal> or higher to allow replication slots to
         be used.
        </para>
-
-       <para>
-        Note that this parameter also applies on the subscriber side, but with
-        a different meaning. See <xref linkend="guc-max-replication-slots-subscriber"/>
-        in <xref linkend="runtime-config-replication-subscriber"/> for more
-        details.
-       </para>
       </listitem>
      </varlistentry>
 
@@ -5062,10 +5055,10 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
 
     <variablelist>
 
-    <varlistentry id="guc-max-replication-slots-subscriber" xreflabel="max_replication_slots">
-     <term><varname>max_replication_slots</varname> (<type>integer</type>)
+    <varlistentry id="guc-max-replication-origins" xreflabel="max_replication_origins">
+     <term><varname>max_replication_origins</varname> (<type>integer</type>)
       <indexterm>
-       <primary><varname>max_replication_slots</varname> configuration parameter</primary>
+       <primary><varname>max_replication_origins</varname> configuration parameter</primary>
        <secondary>in a subscriber</secondary>
       </indexterm>
      </term>
@@ -5077,18 +5070,14 @@ ANY <replaceable class="parameter">num_sync</replaceable> ( <replaceable class="
        be created on the server. Setting it to a lower value than the current
        number of tracked replication origins (reflected in
        <link linkend="view-pg-replication-origin-status">pg_replication_origin_status</link>)
-       will prevent the server from starting.
-       <literal>max_replication_slots</literal> must be set to at least the
+       will prevent the server from starting. It defaults to -1, indicating
+       that the value of <xref linkend="guc-max-replication-slots"/> should be
+       used instead. This parameter can only be set at server start.
+
+       <literal>max_replication_origins</literal> must be set to at least the
        number of subscriptions that will be added to the subscriber, plus some
        reserve for table synchronization.
       </para>
-
-      <para>
-       Note that this parameter also applies on a sending server, but with
-       a different meaning. See <xref linkend="guc-max-replication-slots"/>
-       in <xref linkend="runtime-config-replication-sender"/> for more
-       details.
-      </para>
      </listitem>
     </varlistentry>
 
diff --git a/doc/src/sgml/logical-replication.sgml b/doc/src/sgml/logical-replication.sgml
index 613abcd28b7..508bafbbdf6 100644
--- a/doc/src/sgml/logical-replication.sgml
+++ b/doc/src/sgml/logical-replication.sgml
@@ -2371,9 +2371,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
 
   <para>
    Logical replication requires several configuration options to be set. Most
-   options are relevant only on one side of the replication. However,
-   <varname>max_replication_slots</varname> is used on both the publisher and
-   the subscriber, but it has a different meaning for each.
+   options are relevant only on one side of the replication.
   </para>
 
   <sect2 id="logical-replication-config-publisher">
@@ -2408,7 +2406,7 @@ CONTEXT:  processing remote data for replication origin "pg_16395" during "INSER
    <title>Subscribers</title>
 
    <para>
-    <link linkend="guc-max-replication-slots-subscriber"><varname>max_replication_slots</varname></link>
+    <link linkend="guc-max-replication-origins"><varname>max_replication_origins</varname></link>
     must be set to at least the number of subscriptions that will be added to
     the subscriber, plus some reserve for table synchronization.
    </para>
diff --git a/doc/src/sgml/ref/pg_createsubscriber.sgml b/doc/src/sgml/ref/pg_createsubscriber.sgml
index 26b8e64a4e0..70931a5495f 100644
--- a/doc/src/sgml/ref/pg_createsubscriber.sgml
+++ b/doc/src/sgml/ref/pg_createsubscriber.sgml
@@ -295,7 +295,7 @@ PostgreSQL documentation
 
    <para>
     The target server must be used as a physical standby. The target server
-    must have <xref linkend="guc-max-replication-slots"/> and <xref
+    must have <xref linkend="guc-max-replication-origins"/> and <xref
     linkend="guc-max-logical-replication-workers"/> configured to a value
     greater than or equal to the number of specified databases. The target
     server must have <xref linkend="guc-max-worker-processes"/> configured to a
diff --git a/src/backend/replication/logical/launcher.c b/src/backend/replication/logical/launcher.c
index a3c7adbf1a8..5025bc1c8c4 100644
--- a/src/backend/replication/logical/launcher.c
+++ b/src/backend/replication/logical/launcher.c
@@ -31,7 +31,7 @@
 #include "postmaster/bgworker.h"
 #include "postmaster/interrupt.h"
 #include "replication/logicallauncher.h"
-#include "replication/slot.h"
+#include "replication/origin.h"
 #include "replication/walreceiver.h"
 #include "replication/worker_internal.h"
 #include "storage/ipc.h"
@@ -325,10 +325,10 @@ logicalrep_worker_launch(LogicalRepWorkerType wtype,
 						subname)));
 
 	/* Report this after the initial starting message for consistency. */
-	if (max_replication_slots == 0)
+	if (max_replication_origins == 0)
 		ereport(ERROR,
 				(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
-				 errmsg("cannot start logical replication workers when \"max_replication_slots\"=0")));
+				 errmsg("cannot start logical replication workers when \"max_replication_origins\"=0")));
 
 	/*
 	 * We need to do the modification of the shared memory under lock so that
diff --git a/src/backend/replication/logical/origin.c b/src/backend/replication/logical/origin.c
index 1b586cb1cf2..dfdece081cd 100644
--- a/src/backend/replication/logical/origin.c
+++ b/src/backend/replication/logical/origin.c
@@ -90,6 +90,7 @@
 #include "storage/lmgr.h"
 #include "utils/builtins.h"
 #include "utils/fmgroids.h"
+#include "utils/guc.h"
 #include "utils/pg_lsn.h"
 #include "utils/rel.h"
 #include "utils/snapmgr.h"
@@ -99,6 +100,9 @@
 #define PG_REPLORIGIN_CHECKPOINT_FILENAME PG_LOGICAL_DIR "/replorigin_checkpoint"
 #define PG_REPLORIGIN_CHECKPOINT_TMPFILE PG_REPLORIGIN_CHECKPOINT_FILENAME ".tmp"
 
+/* GUC variables */
+int			max_replication_origins = -1;
+
 /*
  * Replay progress of a single remote node.
  */
@@ -151,7 +155,7 @@ typedef struct ReplicationStateCtl
 {
 	/* Tranche to use for per-origin LWLocks */
 	int			tranche_id;
-	/* Array of length max_replication_slots */
+	/* Array of length max_replication_origins */
 	ReplicationState states[FLEXIBLE_ARRAY_MEMBER];
 } ReplicationStateCtl;
 
@@ -162,10 +166,7 @@ TimestampTz replorigin_session_origin_timestamp = 0;
 
 /*
  * Base address into a shared memory array of replication states of size
- * max_replication_slots.
- *
- * XXX: Should we use a separate variable to size this rather than
- * max_replication_slots?
+ * max_replication_origins.
 */
 static ReplicationState *replication_states;
 
@@ -186,12 +187,12 @@ static ReplicationState *session_replication_state = NULL;
 #define REPLICATION_STATE_MAGIC ((uint32) 0x1257DADE)
 
 static void
-replorigin_check_prerequisites(bool check_slots, bool recoveryOK)
+replorigin_check_prerequisites(bool check_origins, bool recoveryOK)
 {
-	if (check_slots && max_replication_slots == 0)
+	if (check_origins && max_replication_origins == 0)
 		ereport(ERROR,
 				(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
-				 errmsg("cannot query or manipulate replication origin when \"max_replication_slots\" is 0")));
+				 errmsg("cannot query or manipulate replication origin when \"max_replication_origins\" is 0")));
 
 	if (!recoveryOK && RecoveryInProgress())
 		ereport(ERROR,
@@ -352,7 +353,7 @@ replorigin_state_clear(RepOriginId roident, bool nowait)
 restart:
 	LWLockAcquire(ReplicationOriginLock, LW_EXCLUSIVE);
 
-	for (i = 0; i < max_replication_slots; i++)
+	for (i = 0; i < max_replication_origins; i++)
 	{
 		ReplicationState *state = &replication_states[i];
 
@@ -511,18 +512,38 @@ ReplicationOriginShmemSize(void)
 {
 	Size		size = 0;
 
-	/*
-	 * XXX: max_replication_slots is arguably the wrong thing to use, as here
-	 * we keep the replay state of *remote* transactions. But for now it seems
-	 * sufficient to reuse it, rather than introduce a separate GUC.
-	 */
-	if (max_replication_slots == 0)
+	if (max_replication_origins == 0)
 		return size;
 
+	/*
+	 * Prior to PostgreSQL 18, max_replication_slots was used to set the
+	 * number of replication origins. For backward compatibility, -1 indicates
+	 * to use the fallback value (max_replication_slots).
+	 */
+	if (max_replication_origins == -1)
+	{
+		char		buf[32];
+
+		snprintf(buf, sizeof(buf), "%d", max_replication_slots);
+		SetConfigOption("max_replication_origins", buf,
+						PGC_POSTMASTER, PGC_S_DYNAMIC_DEFAULT);
+
+		/*
+		 * We prefer to report this value's source as PGC_S_DYNAMIC_DEFAULT.
+		 * However, if the DBA explicitly set max_replication_origins = -1 in
+		 * the config file, then PGC_S_DYNAMIC_DEFAULT will fail to override
+		 * that and we must force the matter with PGC_S_OVERRIDE.
+		 */
+		if (max_replication_origins == -1)	/* failed to apply it? */
+			SetConfigOption("max_replication_origins", buf,
+							PGC_POSTMASTER, PGC_S_OVERRIDE);
+	}
+	Assert(max_replication_origins != -1);
+
 	size = add_size(size, offsetof(ReplicationStateCtl, states));
 	size = add_size(size,
-					mul_size(max_replication_slots, sizeof(ReplicationState)));
+					mul_size(max_replication_origins, sizeof(ReplicationState)));
 
 	return size;
 }
@@ -531,7 +552,7 @@ ReplicationOriginShmemInit(void)
 {
 	bool		found;
 
-	if (max_replication_slots == 0)
+	if (max_replication_origins == 0)
 		return;
 
 	replication_states_ctl = (ReplicationStateCtl *)
@@ -548,7 +569,7 @@ ReplicationOriginShmemInit(void)
 
 		replication_states_ctl->tranche_id = LWTRANCHE_REPLICATION_ORIGIN_STATE;
 
-		for (i = 0; i < max_replication_slots; i++)
+		for (i = 0; i < max_replication_origins; i++)
 		{
 			LWLockInitialize(&replication_states[i].lock,
 							 replication_states_ctl->tranche_id);
@@ -570,7 +591,7 @@ ReplicationOriginShmemInit(void)
 *
 * So its just the magic, followed by the statically sized
 * ReplicationStateOnDisk structs. Note that the maximum number of
- * ReplicationState is determined by max_replication_slots.
+ * ReplicationState is determined by max_replication_origins.
 * ---------------------------------------------------------------------------
 */
 void
@@ -583,7 +604,7 @@ CheckPointReplicationOrigin(void)
 	uint32		magic = REPLICATION_STATE_MAGIC;
 	pg_crc32c	crc;
 
-	if (max_replication_slots == 0)
+	if (max_replication_origins == 0)
 		return;
 
 	INIT_CRC32C(crc);
@@ -625,7 +646,7 @@ CheckPointReplicationOrigin(void)
 	LWLockAcquire(ReplicationOriginLock, LW_SHARED);
 
 	/* write actual data */
-	for (i = 0; i < max_replication_slots; i++)
+	for (i = 0; i < max_replication_origins; i++)
 	{
 		ReplicationStateOnDisk disk_state;
 		ReplicationState *curstate = &replication_states[i];
@@ -718,7 +739,7 @@ StartupReplicationOrigin(void)
 	already_started = true;
 #endif
 
-	if (max_replication_slots == 0)
+	if (max_replication_origins == 0)
 		return;
 
 	INIT_CRC32C(crc);
@@ -728,8 +749,8 @@ StartupReplicationOrigin(void)
 	fd = OpenTransientFile(path, O_RDONLY | PG_BINARY);
 
 	/*
-	 * might have had max_replication_slots == 0 last run, or we just brought
-	 * up a standby.
+	 * might have had max_replication_origins == 0 last run, or we just
+	 * brought up a standby.
 	 */
 	if (fd < 0 && errno == ENOENT)
 		return;
@@ -796,10 +817,10 @@ StartupReplicationOrigin(void)
 
 		COMP_CRC32C(crc, &disk_state, sizeof(disk_state));
 
-		if (last_state == max_replication_slots)
+		if (last_state == max_replication_origins)
 			ereport(PANIC,
 					(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
-					 errmsg("could not find free replication state, increase \"max_replication_slots\"")));
+					 errmsg("could not find free replication state, increase \"max_replication_origins\"")));
 
 		/* copy data to shared memory */
 		replication_states[last_state].roident = disk_state.roident;
@@ -852,7 +873,7 @@ replorigin_redo(XLogReaderState *record)
 
 				xlrec = (xl_replorigin_drop *) XLogRecGetData(record);
 
-				for (i = 0; i < max_replication_slots; i++)
+				for (i = 0; i < max_replication_origins; i++)
 				{
 					ReplicationState *state = &replication_states[i];
 
@@ -917,7 +938,7 @@ replorigin_advance(RepOriginId node,
 	 * Search for either an existing slot for the origin, or a free one we can
 	 * use.
 	 */
-	for (i = 0; i < max_replication_slots; i++)
+	for (i = 0; i < max_replication_origins; i++)
 	{
 		ReplicationState *curstate = &replication_states[i];
 
@@ -958,7 +979,7 @@ replorigin_advance(RepOriginId node,
 				(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
 				 errmsg("could not find free replication state slot for replication origin with ID %d",
 						node),
-				 errhint("Increase \"max_replication_slots\" and try again.")));
+				 errhint("Increase \"max_replication_origins\" and try again.")));
 
 	if (replication_state == NULL)
 	{
@@ -1024,7 +1045,7 @@ replorigin_get_progress(RepOriginId node, bool flush)
 	/* prevent slots from being concurrently dropped */
 	LWLockAcquire(ReplicationOriginLock, LW_SHARED);
 
-	for (i = 0; i < max_replication_slots; i++)
+	for (i = 0; i < max_replication_origins; i++)
 	{
 		ReplicationState *state;
 
@@ -1110,7 +1131,7 @@ replorigin_session_setup(RepOriginId node, int acquired_by)
 		registered_cleanup = true;
 	}
 
-	Assert(max_replication_slots > 0);
+	Assert(max_replication_origins > 0);
 
 	if (session_replication_state != NULL)
 		ereport(ERROR,
@@ -1124,7 +1145,7 @@ replorigin_session_setup(RepOriginId node, int acquired_by)
 	 * Search for either an existing slot for the origin, or a free one we can
 	 * use.
 	 */
-	for (i = 0; i < max_replication_slots; i++)
+	for (i = 0; i < max_replication_origins; i++)
 	{
 		ReplicationState *curstate = &replication_states[i];
 
@@ -1159,7 +1180,7 @@ replorigin_session_setup(RepOriginId node, int acquired_by)
 				(errcode(ERRCODE_CONFIGURATION_LIMIT_EXCEEDED),
 				 errmsg("could not find free replication state slot for replication origin with ID %d",
 						node),
-				 errhint("Increase \"max_replication_slots\" and try again.")));
+				 errhint("Increase \"max_replication_origins\" and try again.")));
 	else if (session_replication_state == NULL)
 	{
 		/* initialize new slot */
@@ -1195,7 +1216,7 @@ replorigin_session_reset(void)
 {
 	ConditionVariable *cv;
 
-	Assert(max_replication_slots != 0);
+	Assert(max_replication_origins != 0);
 
 	if (session_replication_state == NULL)
 		ereport(ERROR,
@@ -1536,7 +1557,7 @@ pg_show_replication_origin_status(PG_FUNCTION_ARGS)
 	 * filled. Note that we do not take any locks, so slightly corrupted/out
 	 * of date values are a possibility.
 	 */
-	for (i = 0; i < max_replication_slots; i++)
+	for (i = 0; i < max_replication_origins; i++)
 	{
 		ReplicationState *state;
 		Datum		values[REPLICATION_ORIGIN_PROGRESS_COLS];
diff --git a/src/backend/utils/misc/guc_tables.c b/src/backend/utils/misc/guc_tables.c
index 71448bb4fdd..db0640e8bf4 100644
--- a/src/backend/utils/misc/guc_tables.c
+++ b/src/backend/utils/misc/guc_tables.c
@@ -3279,6 +3279,18 @@ struct config_int ConfigureNamesInt[] =
 		NULL, NULL, NULL
 	},
 
+	{
+		{"max_replication_origins",
+			PGC_POSTMASTER,
+			REPLICATION_SUBSCRIBERS,
+			gettext_noop("Sets the maximum number of simultaneously configured replication origins."),
+			NULL
+		},
+		&max_replication_origins,
+		-1, -1, MAX_BACKENDS,
+		NULL, NULL, NULL
+	},
+
 	{
 		{"log_rotation_age", PGC_SIGHUP, LOGGING_WHERE,
 			gettext_noop("Sets the amount of time to wait before forcing "
diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample
index 079efa1baa7..249d3b50620 100644
--- a/src/backend/utils/misc/postgresql.conf.sample
+++ b/src/backend/utils/misc/postgresql.conf.sample
@@ -375,6 +375,9 @@
 
 #max_logical_replication_workers = 4	# taken from max_worker_processes
 					# (change requires restart)
+#max_replication_origins = -1	# maximum number of configured replication origins
+				# -1 to use max_replication_slots
+				# (change requires restart)
 #max_sync_workers_per_subscription = 2	# taken from max_logical_replication_workers
 #max_parallel_apply_workers_per_subscription = 2	# taken from max_logical_replication_workers
 
diff --git a/src/bin/pg_basebackup/pg_createsubscriber.c b/src/bin/pg_basebackup/pg_createsubscriber.c
index faf18ccf131..4cbeeb5021d 100644
--- a/src/bin/pg_basebackup/pg_createsubscriber.c
+++ b/src/bin/pg_basebackup/pg_createsubscriber.c
@@ -964,7 +964,7 @@ check_subscriber(const struct LogicalRepInfo *dbinfo)
 	bool		failed = false;
 
 	int			max_lrworkers;
-	int			max_repslots;
+	int			max_reporigins;
 	int			max_wprocs;
 
 	pg_log_info("checking settings on subscriber");
@@ -983,7 +983,7 @@ check_subscriber(const struct LogicalRepInfo *dbinfo)
 	 * Since these parameters are not a requirement for physical replication,
 	 * we should check it to make sure it won't fail.
 	 *
-	 * - max_replication_slots >= number of dbs to be converted
+	 * - max_replication_origins >= number of dbs to be converted
 	 * - max_logical_replication_workers >= number of dbs to be converted
 	 * - max_worker_processes >= 1 + number of dbs to be converted
 	 *------------------------------------------------------------------------
 	 */
 	res = PQexec(conn,
 				 "SELECT setting FROM pg_catalog.pg_settings WHERE name IN ("
 				 "'max_logical_replication_workers', "
-				 "'max_replication_slots', "
+				 "'max_replication_origins', "
 				 "'max_worker_processes', "
 				 "'primary_slot_name') "
 				 "ORDER BY name");
@@ -1004,14 +1004,14 @@ check_subscriber(const struct LogicalRepInfo *dbinfo)
 	}
 
 	max_lrworkers = atoi(PQgetvalue(res, 0, 0));
-	max_repslots = atoi(PQgetvalue(res, 1, 0));
+	max_reporigins = atoi(PQgetvalue(res, 1, 0));
 	max_wprocs = atoi(PQgetvalue(res, 2, 0));
 	if (strcmp(PQgetvalue(res, 3, 0), "") != 0)
 		primary_slot_name = pg_strdup(PQgetvalue(res, 3, 0));
 
 	pg_log_debug("subscriber: max_logical_replication_workers: %d",
 				 max_lrworkers);
-	pg_log_debug("subscriber: max_replication_slots: %d", max_repslots);
+	pg_log_debug("subscriber: max_replication_origins: %d", max_reporigins);
 	pg_log_debug("subscriber: max_worker_processes: %d", max_wprocs);
 	if (primary_slot_name)
 		pg_log_debug("subscriber: primary_slot_name: %s", primary_slot_name);
@@ -1020,12 +1020,12 @@ check_subscriber(const struct LogicalRepInfo *dbinfo)
 
 	disconnect_database(conn, false);
 
-	if (max_repslots < num_dbs)
+	if (max_reporigins < num_dbs)
 	{
-		pg_log_error("subscriber requires %d replication slots, but only %d remain",
-					 num_dbs, max_repslots);
+		pg_log_error("subscriber requires %d replication origins, but only %d remain",
+					 num_dbs, max_reporigins);
 		pg_log_error_hint("Increase the configuration parameter \"%s\" to at least %d.",
-						  "max_replication_slots", num_dbs);
+						  "max_replication_origins", num_dbs);
 		failed = true;
 	}
 
diff --git a/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl b/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl
index c8dbdb7e9b7..98942d226c0 100644
--- a/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl
+++ b/src/bin/pg_basebackup/t/040_pg_createsubscriber.pl
@@ -274,7 +274,7 @@ max_worker_processes = 8
 # Check some unmet conditions on node S
 $node_s->append_conf(
 	'postgresql.conf', q{
-max_replication_slots = 1
+max_replication_origins = 1
 max_logical_replication_workers = 1
 max_worker_processes = 2
 });
@@ -293,7 +293,7 @@ command_fails(
 	'standby contains unmet conditions on node S');
 $node_s->append_conf(
 	'postgresql.conf', q{
-max_replication_slots = 10
+max_replication_origins = 10
 max_logical_replication_workers = 4
 max_worker_processes = 8
 });
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 7ca1d8fffc9..51d9f7408d8 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -1815,7 +1815,7 @@ check_new_cluster_logical_replication_slots(void)
 /*
  * check_new_cluster_subscription_configuration()
  *
- * Verify that the max_replication_slots configuration specified is enough for
+ * Verify that the max_replication_origins configuration specified is enough for
 * creating the subscriptions. This is required to create the replication
 * origin for each subscription.
 */
@@ -1824,7 +1824,7 @@ check_new_cluster_subscription_configuration(void)
 {
 	PGresult   *res;
 	PGconn	   *conn;
-	int			max_replication_slots;
+	int			max_replication_origins;
 
 	/* Subscriptions and their dependencies can be migrated since PG17. */
 	if (GET_MAJOR_VERSION(old_cluster.major_version) < 1700)
@@ -1839,16 +1839,16 @@ check_new_cluster_subscription_configuration(void)
 	conn = connectToServer(&new_cluster, "template1");
 
 	res = executeQueryOrDie(conn, "SELECT setting FROM pg_settings "
-							"WHERE name = 'max_replication_slots';");
+							"WHERE name = 'max_replication_origins';");
 
 	if (PQntuples(res) != 1)
 		pg_fatal("could not determine parameter settings on new cluster");
 
-	max_replication_slots = atoi(PQgetvalue(res, 0, 0));
-	if (old_cluster.nsubs > max_replication_slots)
-		pg_fatal("\"max_replication_slots\" (%d) must be greater than or equal to the number of "
+	max_replication_origins = atoi(PQgetvalue(res, 0, 0));
+	if (old_cluster.nsubs > max_replication_origins)
+		pg_fatal("\"max_replication_origins\" (%d) must be greater than or equal to the number of "
 				 "subscriptions (%d) on the old cluster",
-				 max_replication_slots, old_cluster.nsubs);
+				 max_replication_origins, old_cluster.nsubs);
 
 	PQclear(res);
 	PQfinish(conn);
diff --git a/src/bin/pg_upgrade/t/004_subscription.pl b/src/bin/pg_upgrade/t/004_subscription.pl
index 13773316e1d..c69ad5da8fa 100644
--- a/src/bin/pg_upgrade/t/004_subscription.pl
+++ b/src/bin/pg_upgrade/t/004_subscription.pl
@@ -41,7 +41,7 @@ chdir ${PostgreSQL::Test::Utils::tmp_check};
 my $connstr = $publisher->connstr . ' dbname=postgres';
 
 # ------------------------------------------------------
-# Check that pg_upgrade fails when max_replication_slots configured in the new
+# Check that pg_upgrade fails when max_replication_origins configured in the new
 # cluster is less than the number of subscriptions in the old cluster.
 # ------------------------------------------------------
 # It is sufficient to use disabled subscription to test upgrade failure.
@@ -52,10 +52,10 @@ $old_sub->safe_psql('postgres',
 
 $old_sub->stop;
 
-$new_sub->append_conf('postgresql.conf', "max_replication_slots = 0");
+$new_sub->append_conf('postgresql.conf', "max_replication_origins = 0");
 
 # pg_upgrade will fail because the new cluster has insufficient
-# max_replication_slots.
+# max_replication_origins.
 command_checks_all(
 	[
 		'pg_upgrade',
@@ -72,14 +72,14 @@ command_checks_all(
 	],
 	1,
 	[
-		qr/"max_replication_slots" \(0\) must be greater than or equal to the number of subscriptions \(1\) on the old cluster/
+		qr/"max_replication_origins" \(0\) must be greater than or equal to the number of subscriptions \(1\) on the old cluster/
 	],
 	[qr//],
-	'run of pg_upgrade where the new cluster has insufficient max_replication_slots'
+	'run of pg_upgrade where the new cluster has insufficient max_replication_origins'
 );
 
-# Reset max_replication_slots
-$new_sub->append_conf('postgresql.conf', "max_replication_slots = 10");
+# Reset max_replication_origins
+$new_sub->append_conf('postgresql.conf', "max_replication_origins = 10");
 
 # Cleanup
 $publisher->safe_psql('postgres', "DROP PUBLICATION regress_pub1");
diff --git a/src/include/replication/origin.h b/src/include/replication/origin.h
index 33a7e59ddb0..8f65d4ef301 100644
--- a/src/include/replication/origin.h
+++ b/src/include/replication/origin.h
@@ -37,6 +37,9 @@ extern PGDLLIMPORT RepOriginId replorigin_session_origin;
 extern PGDLLIMPORT XLogRecPtr replorigin_session_origin_lsn;
 extern PGDLLIMPORT TimestampTz replorigin_session_origin_timestamp;
 
+/* GUCs */
+extern PGDLLIMPORT int max_replication_origins;
+
 /* API for querying & manipulating replication origins */
 extern RepOriginId replorigin_by_name(const char *roname, bool missing_ok);
 extern RepOriginId replorigin_create(const char *roname);
-- 
2.39.5