Hello,
12.08.2024 14:59, David Rowley wrote:
> (I know Daniel mentioned he'd get to these, but the ScanDirection one
> was my fault and I needed to clear that off my mind. I did a few
> others while on this topic.)
Thank you, David, for working on that!
I've gathered another batch of defects, along with possible substitutions.
Please take a look:
adapated -> adapted
becasue -> because
cancelled -> canceled (introduced by 90f517821, but see 8c9da1441)
cange -> change
comand -> command
CommitTSSLRU -> CommitTsSLRU (introduced by 53c2a97a9; maybe the fix
should be back-patched...)
connectOptions2 -> pqConnectOptions2 (see 774bcffe4)
Injections points -> Injection points
jsetate -> jsestate
LockShmemSize -> remove the sentence? (added by ec0baf949, outdated with
a794fb068)
MaybeStartSlotSyncWorker -> LaunchMissingBackgroundProcesses (the logic to
start B_SLOTSYNC_WORKER moved from the former to the latter function with
3354f8528)
multixact_member_buffer -> multixact_member_buffers
per_data_data -> per_buffer_data (see code below the comment; introduced by
b5a9b18cd)
per_buffer_private -> remove the function declaration? (the duplicate
declaration was added by a858be17c)
performancewise -> performance-wise? (coined by a7f107df2)
pgstat_add_kind -> pgstat_register_kind (see 7949d9594)
pg_signal_autovacuum -> pg_signal_autovacuum_worker (see d2b74882c)
recoveery -> recovery
RegisteredWorker -> RegisteredBgWorker
RUNNING_XACT -> RUNNING_XACTS
sanpshot -> snapshot
TypeEntry -> TypeCacheEntry (align with AttoptCacheEntry, from the same
commit 40064a8ee)
The corresponding patch is attached for your convenience.
Best regards,
Alexander
diff --git a/contrib/test_decoding/specs/skip_snapshot_restore.spec b/contrib/test_decoding/specs/skip_snapshot_restore.spec
index 3f1fb6f02c7..7b35dbcc9f3 100644
--- a/contrib/test_decoding/specs/skip_snapshot_restore.spec
+++ b/contrib/test_decoding/specs/skip_snapshot_restore.spec
@@ -39,7 +39,7 @@ step "s2_get_changes_slot0" { SELECT data FROM pg_logical_slot_get_changes('slot
# serializes consistent snapshots to the disk at LSNs where are before
# s0-transaction's commit. After s0-transaction commits, "s1_init" resumes but
# must not restore any serialized snapshots and will reach the consistent state
-# when decoding a RUNNING_XACT record generated after s0-transaction's commit.
+# when decoding a RUNNING_XACTS record generated after s0-transaction's commit.
# We check if the get_changes on 'slot1' will not return any s0-transaction's
# changes as its confirmed_flush_lsn will be after the s0-transaction's commit
# record.
diff --git a/doc/src/sgml/xfunc.sgml b/doc/src/sgml/xfunc.sgml
index 9bc23a9a938..af7864a1b5b 100644
--- a/doc/src/sgml/xfunc.sgml
+++ b/doc/src/sgml/xfunc.sgml
@@ -3891,8 +3891,8 @@ static const PgStat_KindInfo custom_stats = {
it with <literal>pgstat_register_kind</literal> and a unique ID used to
store the entries related to this type of statistics:
<programlisting>
-extern PgStat_Kind pgstat_add_kind(PgStat_Kind kind,
- const PgStat_KindInfo *kind_info);
+extern PgStat_Kind pgstat_register_kind(PgStat_Kind kind,
+ const PgStat_KindInfo *kind_info);
</programlisting>
While developing a new extension, use
<literal>PGSTAT_KIND_EXPERIMENTAL</literal> for
diff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c
index a03d56541d0..8c37d7eba76 100644
--- a/src/backend/access/transam/multixact.c
+++ b/src/backend/access/transam/multixact.c
@@ -2017,7 +2017,7 @@ check_multixact_offset_buffers(int *newval, void **extra, GucSource source)
}
/*
- * GUC check_hook for multixact_member_buffer
+ * GUC check_hook for multixact_member_buffers
*/
bool
check_multixact_member_buffers(int *newval, void **extra, GucSource source)
diff --git a/src/backend/commands/matview.c b/src/backend/commands/matview.c
index 91f0fd6ea3e..b2457f121a7 100644
--- a/src/backend/commands/matview.c
+++ b/src/backend/commands/matview.c
@@ -382,7 +382,7 @@ RefreshMatViewByOid(Oid matviewOid, bool is_create, bool skipData,
* command tag is left false in cmdtaglist.h. Otherwise, the change of
* completion tag output might break applications using it.
*
- * When called from CREATE MATERIALIZED VIEW comand, the rowcount is
+ * When called from CREATE MATERIALIZED VIEW command, the rowcount is
* displayed with the command tag CMDTAG_SELECT.
*/
if (qc)
diff --git a/src/backend/commands/waitlsn.c b/src/backend/commands/waitlsn.c
index d9cf9e7d75e..d7065726749 100644
--- a/src/backend/commands/waitlsn.c
+++ b/src/backend/commands/waitlsn.c
@@ -369,7 +369,7 @@ pg_wal_replay_wait(PG_FUNCTION_ARGS)
*/
InvalidateCatalogSnapshot();
- /* Give up if there is still an active or registered sanpshot. */
+ /* Give up if there is still an active or registered snapshot. */
if (GetOldestSnapshot())
ereport(ERROR,
(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c
index 77394e76c37..a6c47f61e0d 100644
--- a/src/backend/executor/execExprInterp.c
+++ b/src/backend/executor/execExprInterp.c
@@ -1101,7 +1101,7 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)
EEO_CASE(EEOP_PARAM_SET)
{
- /* out of line, unlikely to matter performancewise */
+ /* out of line, unlikely to matter performance-wise */
ExecEvalParamSet(state, op, econtext);
EEO_NEXT();
}
@@ -4762,7 +4762,7 @@ ExecEvalJsonCoercionFinish(ExprState *state, ExprEvalStep *op)
if (SOFT_ERROR_OCCURRED(&jsestate->escontext))
{
/*
- * jsestate->error or jsetate->empty being set means that the error
+ * jsestate->error or jsestate->empty being set means that the error
* occurred when coercing the JsonBehavior value. Throw the error in
* that case with the actual coercion error message shown in the
* DETAIL part.
diff --git a/src/backend/postmaster/postmaster.c b/src/backend/postmaster/postmaster.c
index a6fff93db34..59da773ee1f 100644
--- a/src/backend/postmaster/postmaster.c
+++ b/src/backend/postmaster/postmaster.c
@@ -2628,7 +2628,7 @@ CleanupBackend(Backend *bp,
BackgroundWorkerStopNotifications(bp->pid);
/*
- * If it was a background worker, also update its RegisteredWorker entry.
+ * If it was a background worker, also update its RegisteredBgWorker entry.
*/
if (bp->bkend_type == BACKEND_TYPE_BGWORKER)
{
diff --git a/src/backend/replication/logical/slotsync.c b/src/backend/replication/logical/slotsync.c
index 51072297fd3..33378bacea6 100644
--- a/src/backend/replication/logical/slotsync.c
+++ b/src/backend/replication/logical/slotsync.c
@@ -83,7 +83,7 @@
* this flag is set. Note that we don't need to reset this variable as after
* promotion the slot sync worker won't be restarted because the pmState
* changes to PM_RUN from PM_HOT_STANDBY and we don't support demoting
- * primary without restarting the server. See MaybeStartSlotSyncWorker.
+ * primary without restarting the server. See LaunchMissingBackgroundProcesses.
*
* The 'syncing' flag is needed to prevent concurrent slot syncs to avoid slot
* overwrites.
diff --git a/src/backend/storage/aio/read_stream.c b/src/backend/storage/aio/read_stream.c
index 93cdd35fea0..064861e5fb7 100644
--- a/src/backend/storage/aio/read_stream.c
+++ b/src/backend/storage/aio/read_stream.c
@@ -449,7 +449,7 @@ read_stream_begin_impl(int flags,
queue_size = max_pinned_buffers + 1;
/*
- * Allocate the object, the buffers, the ios and per_data_data space in
+ * Allocate the object, the buffers, the ios and per_buffer_data space in
* one big chunk. Though we have queue_size buffers, we want to be able
* to assume that all the buffers for a single read are contiguous (i.e.
* don't wrap around halfway through), so we allow temporary overflows of
diff --git a/src/backend/storage/lmgr/lock.c b/src/backend/storage/lmgr/lock.c
index 6dbc41dae70..c239888eec6 100644
--- a/src/backend/storage/lmgr/lock.c
+++ b/src/backend/storage/lmgr/lock.c
@@ -395,8 +395,7 @@ LockManagerShmemInit(void)
bool found;
/*
- * Compute init/max size to request for lock hashtables. Note these
- * calculations must agree with LockShmemSize!
+ * Compute init/max size to request for lock hashtables.
*/
max_table_size = NLOCKENTS();
init_table_size = max_table_size / 2;
diff --git a/src/backend/storage/lmgr/lwlock.c b/src/backend/storage/lmgr/lwlock.c
index e765754d805..db6ed784ab3 100644
--- a/src/backend/storage/lmgr/lwlock.c
+++ b/src/backend/storage/lmgr/lwlock.c
@@ -158,7 +158,7 @@ static const char *const BuiltinTrancheNames[] = {
[LWTRANCHE_LAUNCHER_HASH] = "LogicalRepLauncherHash",
[LWTRANCHE_DSM_REGISTRY_DSA] = "DSMRegistryDSA",
[LWTRANCHE_DSM_REGISTRY_HASH] = "DSMRegistryHash",
- [LWTRANCHE_COMMITTS_SLRU] = "CommitTSSLRU",
+ [LWTRANCHE_COMMITTS_SLRU] = "CommitTsSLRU",
[LWTRANCHE_MULTIXACTOFFSET_SLRU] = "MultixactOffsetSLRU",
[LWTRANCHE_MULTIXACTMEMBER_SLRU] = "MultixactMemberSLRU",
[LWTRANCHE_NOTIFY_SLRU] = "NotifySLRU",
diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c
index 0b9e60845b2..2ec136b7d30 100644
--- a/src/backend/utils/cache/typcache.c
+++ b/src/backend/utils/cache/typcache.c
@@ -367,9 +367,9 @@ lookup_type_cache(Oid type_id, int flags)
ctl.entrysize = sizeof(TypeCacheEntry);
/*
- * TypeEntry takes hash value from the system cache. For TypeCacheHash
- * we use the same hash in order to speedup search by hash value. This
- * is used by hash_seq_init_with_hash_value().
+ * TypeCacheEntry takes hash value from the system cache. For
+ * TypeCacheHash we use the same hash in order to speedup search by
+ * hash value. This is used by hash_seq_init_with_hash_value().
*/
ctl.hash = type_cache_syshash;
diff --git a/src/bin/pg_combinebackup/t/008_promote.pl b/src/bin/pg_combinebackup/t/008_promote.pl
index 1154a5d8b22..0ee96ff037c 100644
--- a/src/bin/pg_combinebackup/t/008_promote.pl
+++ b/src/bin/pg_combinebackup/t/008_promote.pl
@@ -54,7 +54,7 @@ recovery_target_action = 'pause'
EOM
$node2->start();
-# Wait until recoveery pauses, then promote.
+# Wait until recovery pauses, then promote.
$node2->poll_query_until('postgres', "SELECT pg_get_wal_replay_pause_state() = 'paused';");
$node2->safe_psql('postgres', "SELECT pg_promote()");
@@ -65,7 +65,7 @@ INSERT INTO mytable VALUES (2, 'blackberry');
EOM
# Now take an incremental backup. If WAL summarization didn't follow the
-# timeline cange correctly, something should break at this point.
+# timeline change correctly, something should break at this point.
my $backup2path = $node1->backup_dir . '/backup2';
$node2->command_ok(
[ 'pg_basebackup', '-D', $backup2path, '--no-sync', '-cfast',
diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c
index be265aa05a4..066dccbd841 100644
--- a/src/bin/psql/common.c
+++ b/src/bin/psql/common.c
@@ -1715,7 +1715,7 @@ ExecQueryAndProcessResults(const char *query,
{
/*
* Display the current chunk of results, unless the output
- * stream stopped working or we got cancelled. We skip use of
+ * stream stopped working or we got canceled. We skip use of
* PrintQueryResult and go directly to printQuery, so that we
* can pass the correct is_pager value and because we don't
* want PrintQueryStatus to happen yet. Above, we rejected
diff --git a/src/fe_utils/astreamer_gzip.c b/src/fe_utils/astreamer_gzip.c
index 0d12b9bce7a..ca5be6423a1 100644
--- a/src/fe_utils/astreamer_gzip.c
+++ b/src/fe_utils/astreamer_gzip.c
@@ -13,7 +13,7 @@
* taken here is less flexible, because a writer can only write to a file,
* while a compressor can write to a subsequent astreamer which is free
* to do whatever it likes. The reason it's like this is because this
- * code was adapated from old, less-modular pg_basebackup code that used
+ * code was adapted from old, less-modular pg_basebackup code that used
* the same APIs that astreamer_gzip_writer now uses, and it didn't seem
* necessary to change anything at the time.
*
diff --git a/src/include/storage/read_stream.h b/src/include/storage/read_stream.h
index 4e599904f26..42a623bfc54 100644
--- a/src/include/storage/read_stream.h
+++ b/src/include/storage/read_stream.h
@@ -66,7 +66,6 @@ extern ReadStream *read_stream_begin_smgr_relation(int flags,
ReadStreamBlockNumberCB callback,
void *callback_private_data,
size_t per_buffer_data_size);
-extern Buffer read_stream_next_buffer(ReadStream *stream, void **per_buffer_private);
extern void read_stream_reset(ReadStream *stream);
extern void read_stream_end(ReadStream *stream);
diff --git a/src/include/utils/injection_point.h b/src/include/utils/injection_point.h
index b4fc677c9b4..b1f06b25998 100644
--- a/src/include/utils/injection_point.h
+++ b/src/include/utils/injection_point.h
@@ -12,7 +12,7 @@
#define INJECTION_POINT_H
/*
- * Injections points require --enable-injection-points.
+ * Injection points require --enable-injection-points.
*/
#ifdef USE_INJECTION_POINTS
#define INJECTION_POINT_LOAD(name) InjectionPointLoad(name)
diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 3fa2dd864fe..9febdaa2885 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -948,7 +948,7 @@ fillPGconn(PGconn *conn, PQconninfoOption *connOptions)
* Copy over option values from srcConn to dstConn
*
* Don't put anything cute here --- intelligence should be in
- * connectOptions2 ...
+ * pqConnectOptions2 ...
*
* Returns true on success. On failure, returns false and sets error message of
* dstConn.
diff --git a/src/test/modules/test_misc/t/006_signal_autovacuum.pl b/src/test/modules/test_misc/t/006_signal_autovacuum.pl
index 51bdefe24aa..929253f7542 100644
--- a/src/test/modules/test_misc/t/006_signal_autovacuum.pl
+++ b/src/test/modules/test_misc/t/006_signal_autovacuum.pl
@@ -67,7 +67,7 @@ like(
my $offset = -s $node->logfile;
-# Role with pg_signal_autovacuum can terminate autovacuum worker.
+# Role with pg_signal_autovacuum_worker can terminate autovacuum worker.
my $terminate_with_pg_signal_av = $node->psql(
'postgres', qq(
SET ROLE regress_worker_role;
diff --git a/src/test/subscription/t/021_twophase.pl b/src/test/subscription/t/021_twophase.pl
index 19147f31e21..98fe59ac5a4 100644
--- a/src/test/subscription/t/021_twophase.pl
+++ b/src/test/subscription/t/021_twophase.pl
@@ -76,7 +76,7 @@ $node_publisher->safe_psql(
INSERT INTO tab_full VALUES (11);
PREPARE TRANSACTION 'test_prepared_tab_full';");
-# Confirm the ERROR is reported becasue max_prepared_transactions is zero
+# Confirm the ERROR is reported because max_prepared_transactions is zero
$node_subscriber->wait_for_log(
qr/ERROR: ( [A-Z0-9]+:)? prepared transactions are disabled/);