Shubham Khanna <khannashubham1...@gmail.com> writes:

> I was reviewing the Patch and came across a minor issue that the Patch
> does not apply on the current Head. Please provide the updated version
> of the patch.
Thanks for the heads-up. Commit 5ccb3bb13dcbedc30d015fc06d306d5106701e16
removed one of the instances of "data struture" fixed by the patch.
Rebased patch set attached. I also squashed the check_decls.m4 change into
the main comment typos commit.

> Also, I found one typo:
> 0008-ecpg-fix-typo-in-get_dtype-return-value-for-ECPGd_co.patch
> All the other enum values return a string mathing the enum label, but
> this has had a trailing r since the function was added in commit
> 339a5bbfb17ecd171ebe076c5bf016c4e66e2c0a
>
> Here 'mathing' should be 'matching'.

Thanks. I've fixed the commit message (and elaborated a bit more on why I
think it's a valid and safe fix).

> Thanks and Regards,
> Shubham Khanna.

- ilmari
>From 5ccb3bb13dcbedc30d015fc06d306d5106701e16 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 01:46:27 +0000 Subject: [PATCH v2 01/12] Fix typos in comments --- config/check_decls.m4 | 2 +- contrib/bloom/bloom.h | 2 +- contrib/pgcrypto/expected/pgp-compression.out | 2 +- contrib/pgcrypto/openssl.c | 2 +- contrib/pgcrypto/pgp-encrypt.c | 2 +- contrib/pgcrypto/sql/pgp-compression.sql | 2 +- contrib/postgres_fdw/expected/postgres_fdw.out | 2 +- contrib/postgres_fdw/sql/postgres_fdw.sql | 2 +- src/backend/access/brin/brin.c | 6 +++--- src/backend/access/common/heaptuple.c | 2 +- src/backend/access/gist/gistbuild.c | 2 +- src/backend/access/heap/heapam.c | 4 ++-- src/backend/access/nbtree/nbtree.c | 2 +- src/backend/catalog/namespace.c | 2 +- src/backend/catalog/pg_constraint.c | 2 +- src/backend/commands/event_trigger.c | 2 +- src/backend/executor/execMain.c | 2 +- src/backend/optimizer/plan/initsplan.c | 4 ++-- src/backend/utils/adt/rangetypes.c | 2 +- src/backend/utils/cache/catcache.c | 2 +- src/backend/utils/sort/tuplesortvariants.c | 10 +++++----- src/backend/utils/time/combocid.c | 2 +- src/bin/pg_rewind/t/001_basic.pl | 2 +- src/bin/pg_rewind/t/004_pg_xlog_symlink.pl | 2 +- src/bin/pg_rewind/t/007_standby_source.pl | 2 +- src/bin/pg_rewind/t/009_growing_files.pl | 2 +- src/include/pg_config_manual.h | 2 +- src/test/isolation/specs/stats.spec | 16 ++++++++-------- src/test/recovery/t/029_stats_restart.pl | 2 +- src/test/regress/expected/boolean.out | 2 +- src/test/regress/expected/brin_multi.out | 4 ++-- src/test/regress/expected/join.out | 2 +- src/test/regress/sql/boolean.sql | 2 +- src/test/regress/sql/brin_multi.sql | 4 ++-- src/test/regress/sql/join.sql | 2 +- 35 files changed, 52 insertions(+), 52 deletions(-) diff --git a/config/check_decls.m4 b/config/check_decls.m4 index f1b90c5430..2dfcfe13fb 100644 --- a/config/check_decls.m4 +++ b/config/check_decls.m4 @@ -31,7 +31,7 @@ # respectively. If not, see <http://www.gnu.org/licenses/>. # Written by David MacKenzie, with help from -# Franc,ois Pinard, Karl Berry, Richard Pixley, Ian Lance Taylor, +# François Pinard, Karl Berry, Richard Pixley, Ian Lance Taylor, # Roland McGrath, Noah Friedman, david d zuhn, and many others. diff --git a/contrib/bloom/bloom.h b/contrib/bloom/bloom.h index 330811ec60..7c4407b9ec 100644 --- a/contrib/bloom/bloom.h +++ b/contrib/bloom/bloom.h @@ -127,7 +127,7 @@ typedef struct BloomMetaPageData FreeBlockNumberArray notFullPage; } BloomMetaPageData; -/* Magic number to distinguish bloom pages among anothers */ +/* Magic number to distinguish bloom pages from others */ #define BLOOM_MAGICK_NUMBER (0xDBAC0DED) /* Number of blocks numbers fit in BloomMetaPageData */ diff --git a/contrib/pgcrypto/expected/pgp-compression.out b/contrib/pgcrypto/expected/pgp-compression.out index d4c57feba3..67e2dce897 100644 --- a/contrib/pgcrypto/expected/pgp-compression.out +++ b/contrib/pgcrypto/expected/pgp-compression.out @@ -60,7 +60,7 @@ WITH random_string AS -- This generates a random string of 16366 bytes. This is chosen -- as random so that it does not get compressed, and the decompression -- would work on a string with the same length as the origin, making the - -- test behavior more predictible. lpad() ensures that the generated + -- test behavior more predictable. lpad() ensures that the generated -- hexadecimal value is completed by extra zero characters if random() -- has generated a value strictly lower than 16. 
SELECT string_agg(decode(lpad(to_hex((random()*256)::int), 2, '0'), 'hex'), '') as bytes diff --git a/contrib/pgcrypto/openssl.c b/contrib/pgcrypto/openssl.c index 4a913bd04f..8259de5e39 100644 --- a/contrib/pgcrypto/openssl.c +++ b/contrib/pgcrypto/openssl.c @@ -460,7 +460,7 @@ bf_init(PX_Cipher *c, const uint8 *key, unsigned klen, const uint8 *iv) /* * Test if key len is supported. BF_set_key silently cut large keys and it - * could be a problem when user transfer crypted data from one server to + * could be a problem when user transfer encrypted data from one server to * another. */ diff --git a/contrib/pgcrypto/pgp-encrypt.c b/contrib/pgcrypto/pgp-encrypt.c index f7467c9b1c..24fdbd0524 100644 --- a/contrib/pgcrypto/pgp-encrypt.c +++ b/contrib/pgcrypto/pgp-encrypt.c @@ -645,7 +645,7 @@ pgp_encrypt(PGP_Context *ctx, MBuf *src, MBuf *dst) goto out; pf = pf_tmp; - /* encrypter */ + /* encryptor */ res = pushf_create(&pf_tmp, &encrypt_filter, ctx, pf); if (res < 0) goto out; diff --git a/contrib/pgcrypto/sql/pgp-compression.sql b/contrib/pgcrypto/sql/pgp-compression.sql index 87c59c6cab..82080e4389 100644 --- a/contrib/pgcrypto/sql/pgp-compression.sql +++ b/contrib/pgcrypto/sql/pgp-compression.sql @@ -36,7 +36,7 @@ WITH random_string AS -- This generates a random string of 16366 bytes. This is chosen -- as random so that it does not get compressed, and the decompression -- would work on a string with the same length as the origin, making the - -- test behavior more predictible. lpad() ensures that the generated + -- test behavior more predictable. lpad() ensures that the generated -- hexadecimal value is completed by extra zero characters if random() -- has generated a value strictly lower than 16. SELECT string_agg(decode(lpad(to_hex((random()*256)::int), 2, '0'), 'hex'), '') as bytes diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out index c988745b92..d83f6ae8cb 100644 --- a/contrib/postgres_fdw/expected/postgres_fdw.out +++ b/contrib/postgres_fdw/expected/postgres_fdw.out @@ -4819,7 +4819,7 @@ SELECT * FROM ft2 ftupper WHERE 925 | 5 | 00925 | Mon Jan 26 00:00:00 1970 PST | Mon Jan 26 00:00:00 1970 | 5 | 5 | foo (10 rows) --- EXISTS should be propogated to the highest upper inner join +-- EXISTS should be propagated to the highest upper inner join EXPLAIN (verbose, costs off) SELECT ft2.*, ft4.* FROM ft2 INNER JOIN (SELECT * FROM ft4 WHERE EXISTS ( diff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql index cb40540702..90c8fa4b70 100644 --- a/contrib/postgres_fdw/sql/postgres_fdw.sql +++ b/contrib/postgres_fdw/sql/postgres_fdw.sql @@ -1399,7 +1399,7 @@ SELECT * FROM ft2 ftupper WHERE AND ftupper.c1 > 900 ORDER BY ftupper.c1 LIMIT 10; --- EXISTS should be propogated to the highest upper inner join +-- EXISTS should be propagated to the highest upper inner join EXPLAIN (verbose, costs off) SELECT ft2.*, ft4.* FROM ft2 INNER JOIN (SELECT * FROM ft4 WHERE EXISTS ( diff --git a/src/backend/access/brin/brin.c b/src/backend/access/brin/brin.c index dfa34f49a4..6f1f551897 100644 --- a/src/backend/access/brin/brin.c +++ b/src/backend/access/brin/brin.c @@ -348,7 +348,7 @@ brininsert(Relation idxRel, Datum *values, bool *nulls, bool autosummarize = BrinGetAutoSummarize(idxRel); /* - * If firt time through in this statement, initialize the insert state + * If first time through in this statement, initialize the insert state * that we keep for all the inserts in the command. 
*/ if (!bistate) @@ -1042,7 +1042,7 @@ brinbuildCallbackParallel(Relation index, /* * If we're in a block that belongs to a different range, summarize what * we've got and start afresh. Note the scan might have skipped many - * pages, if they were devoid of live tuples; we do not create emptry BRIN + * pages, if they were devoid of live tuples; we do not create empty BRIN * ranges here - the leader is responsible for filling them in. * * Unlike serial builds, parallel index builds allow synchronized seqscans @@ -2149,7 +2149,7 @@ union_tuples(BrinDesc *bdesc, BrinMemTuple *a, BrinTuple *b) * brin_vacuum_scan * Do a complete scan of the index during VACUUM. * - * This routine scans the complete index looking for uncatalogued index pages, + * This routine scans the complete index looking for uncataloged index pages, * i.e. those that might have been lost due to a crash after index extension * and such. */ diff --git a/src/backend/access/common/heaptuple.c b/src/backend/access/common/heaptuple.c index c52d40dce0..88fb9e3445 100644 --- a/src/backend/access/common/heaptuple.c +++ b/src/backend/access/common/heaptuple.c @@ -85,7 +85,7 @@ ((att)->attstorage != TYPSTORAGE_PLAIN) /* - * Setup for cacheing pass-by-ref missing attributes in a way that survives + * Setup for caching pass-by-ref missing attributes in a way that survives * tupleDesc destruction. */ diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c index a45e2fe375..cacf50b269 100644 --- a/src/backend/access/gist/gistbuild.c +++ b/src/backend/access/gist/gistbuild.c @@ -121,7 +121,7 @@ typedef struct * * Sorting GiST build requires good linearization of the sort opclass. This is * not always the case in multidimensional data. To tackle the anomalies, we - * buffer index tuples and apply picksplit that can be multidimension-aware. + * buffer index tuples and apply picksplit that can be multidimensional-aware. */ typedef struct GistSortedBuildLevelState { diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c index f938715359..c83dec41a0 100644 --- a/src/backend/access/heap/heapam.c +++ b/src/backend/access/heap/heapam.c @@ -1986,7 +1986,7 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid, ReleaseBuffer(vmbuffer); /* - * If tuple is cachable, mark it for invalidation from the caches in case + * If tuple is cacheable, mark it for invalidation from the caches in case * we abort. Note it is OK to do this after releasing the buffer, because * the heaptup data structure is all in local memory, not in the shared * buffer. @@ -2428,7 +2428,7 @@ heap_multi_insert(Relation relation, TupleTableSlot **slots, int ntuples, CheckForSerializableConflictIn(relation, NULL, InvalidBlockNumber); /* - * If tuples are cachable, mark them for invalidation from the caches in + * If tuples are cacheable, mark them for invalidation from the caches in * case we abort. Note it is OK to do this after releasing the buffer, * because the heaptuples data structure is all in local memory, not in * the shared buffer. diff --git a/src/backend/access/nbtree/nbtree.c b/src/backend/access/nbtree/nbtree.c index dd6dc0971b..c95b52eac4 100644 --- a/src/backend/access/nbtree/nbtree.c +++ b/src/backend/access/nbtree/nbtree.c @@ -158,7 +158,7 @@ btbuildempty(Relation index) Page metapage; /* - * Initalize the metapage. + * Initialize the metapage. * * Regular index build bypasses the buffer manager and uses smgr functions * directly, with an smgrimmedsync() call at the end. 
That makes sense diff --git a/src/backend/catalog/namespace.c b/src/backend/catalog/namespace.c index 37a69e9023..3f777693ae 100644 --- a/src/backend/catalog/namespace.c +++ b/src/backend/catalog/namespace.c @@ -4218,7 +4218,7 @@ cachedNamespacePath(const char *searchPath, Oid roleid) entry = spcache_insert(searchPath, roleid); /* - * An OOM may have resulted in a cache entry with mising 'oidlist' or + * An OOM may have resulted in a cache entry with missing 'oidlist' or * 'finalPath', so just compute whatever is missing. */ diff --git a/src/backend/catalog/pg_constraint.c b/src/backend/catalog/pg_constraint.c index e9d4d6006e..b0730c99af 100644 --- a/src/backend/catalog/pg_constraint.c +++ b/src/backend/catalog/pg_constraint.c @@ -1290,7 +1290,7 @@ get_relation_constraint_attnos(Oid relid, const char *conname, /* * Return the OID of the constraint enforced by the given index in the - * given relation; or InvalidOid if no such index is catalogued. + * given relation; or InvalidOid if no such index is cataloged. * * Much like get_constraint_index, this function is concerned only with the * one constraint that "owns" the given index. Therefore, constraints of diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c index bf47b0f6e2..35d5508f4a 100644 --- a/src/backend/commands/event_trigger.c +++ b/src/backend/commands/event_trigger.c @@ -387,7 +387,7 @@ SetDatatabaseHasLoginEventTriggers(void) HeapTuple tuple; /* - * Use shared lock to prevent a conflit with EventTriggerOnLogin() trying + * Use shared lock to prevent a conflict with EventTriggerOnLogin() trying * to reset pg_database.dathasloginevt flag. Note, this lock doesn't * effectively blocks database or other objection. It's just custom lock * tag used to prevent multiple backends changing diff --git a/src/backend/executor/execMain.c b/src/backend/executor/execMain.c index 4c5a7bbf62..9539377139 100644 --- a/src/backend/executor/execMain.c +++ b/src/backend/executor/execMain.c @@ -1849,7 +1849,7 @@ ExecPartitionCheck(ResultRelInfo *resultRelInfo, TupleTableSlot *slot, econtext->ecxt_scantuple = slot; /* - * As in case of the catalogued constraints, we treat a NULL result as + * As in case of the cataloged constraints, we treat a NULL result as * success here, not a failure. */ success = ExecCheck(resultRelInfo->ri_PartitionCheckExpr, econtext); diff --git a/src/backend/optimizer/plan/initsplan.c b/src/backend/optimizer/plan/initsplan.c index 8295e7753d..b0f9e21474 100644 --- a/src/backend/optimizer/plan/initsplan.c +++ b/src/backend/optimizer/plan/initsplan.c @@ -1928,8 +1928,8 @@ deconstruct_distribute_oj_quals(PlannerInfo *root, * jtitems list to be ordered that way. * * We first strip out all the nullingrels bits corresponding to - * commutating joins below this one, and then successively put them - * back as we crawl up the join stack. + * commuting joins below this one, and then successively put them back + * as we crawl up the join stack. */ quals = jtitem->oj_joinclauses; if (!bms_is_empty(joins_below)) diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c index 24bad52923..d3fc88ec2d 100644 --- a/src/backend/utils/adt/rangetypes.c +++ b/src/backend/utils/adt/rangetypes.c @@ -2608,7 +2608,7 @@ range_contains_elem_internal(TypeCacheEntry *typcache, const RangeType *r, Datum * values into a range object. They are modeled after heaptuple.c's * heap_compute_data_size() and heap_fill_tuple(), but we need not handle * null values here. 
TYPE_IS_PACKABLE must test the same conditions as - * heaptuple.c's ATT_IS_PACKABLE macro. See the comments thare for more + * heaptuple.c's ATT_IS_PACKABLE macro. See the comments there for more * details. */ diff --git a/src/backend/utils/cache/catcache.c b/src/backend/utils/cache/catcache.c index 2e2e4d9f1f..ccf39368a5 100644 --- a/src/backend/utils/cache/catcache.c +++ b/src/backend/utils/cache/catcache.c @@ -760,7 +760,7 @@ ResetCatalogCaches(void) * kinds of trouble if a cache flush occurs while loading cache entries. * We now avoid the need to do it by copying cc_tupdesc out of the relcache, * rather than relying on the relcache to keep a tupdesc for us. Of course - * this assumes the tupdesc of a cachable system table will not change...) + * this assumes the tupdesc of a cacheable system table will not change...) */ void CatalogCacheFlushCatalog(Oid catId) diff --git a/src/backend/utils/sort/tuplesortvariants.c b/src/backend/utils/sort/tuplesortvariants.c index 27425880a5..1aa2a3bb5b 100644 --- a/src/backend/utils/sort/tuplesortvariants.c +++ b/src/backend/utils/sort/tuplesortvariants.c @@ -93,7 +93,7 @@ static void readtup_datum(Tuplesortstate *state, SortTuple *stup, static void freestate_cluster(Tuplesortstate *state); /* - * Data struture pointed by "TuplesortPublic.arg" for the CLUSTER case. Set by + * Data structure pointed by "TuplesortPublic.arg" for the CLUSTER case. Set by * the tuplesort_begin_cluster. */ typedef struct @@ -105,7 +105,7 @@ typedef struct } TuplesortClusterArg; /* - * Data struture pointed by "TuplesortPublic.arg" for the IndexTuple case. + * Data structure pointed by "TuplesortPublic.arg" for the IndexTuple case. * Set by tuplesort_begin_index_xxx and used only by the IndexTuple routines. */ typedef struct @@ -115,7 +115,7 @@ typedef struct } TuplesortIndexArg; /* - * Data struture pointed by "TuplesortPublic.arg" for the index_btree subcase. + * Data structure pointed by "TuplesortPublic.arg" for the index_btree subcase. */ typedef struct { @@ -126,7 +126,7 @@ typedef struct } TuplesortIndexBTreeArg; /* - * Data struture pointed by "TuplesortPublic.arg" for the index_hash subcase. + * Data structure pointed by "TuplesortPublic.arg" for the index_hash subcase. */ typedef struct { @@ -138,7 +138,7 @@ typedef struct } TuplesortIndexHashArg; /* - * Data struture pointed by "TuplesortPublic.arg" for the Datum case. + * Data structure pointed by "TuplesortPublic.arg" for the Datum case. * Set by tuplesort_begin_datum and used only by the DatumTuple routines. */ typedef struct diff --git a/src/backend/utils/time/combocid.c b/src/backend/utils/time/combocid.c index 0e94bc93f7..192d9c1efc 100644 --- a/src/backend/utils/time/combocid.c +++ b/src/backend/utils/time/combocid.c @@ -4,7 +4,7 @@ * Combo command ID support routines * * Before version 8.3, HeapTupleHeaderData had separate fields for cmin - * and cmax. To reduce the header size, cmin and cmax are now overlayed + * and cmax. To reduce the header size, cmin and cmax are now overlaid * in the same field in the header. That usually works because you rarely * insert and delete a tuple in the same transaction, and we don't need * either field to remain valid after the originating transaction exits. diff --git a/src/bin/pg_rewind/t/001_basic.pl b/src/bin/pg_rewind/t/001_basic.pl index 842f6c7fbe..54cd00ca04 100644 --- a/src/bin/pg_rewind/t/001_basic.pl +++ b/src/bin/pg_rewind/t/001_basic.pl @@ -60,7 +60,7 @@ sub run_test # Insert a row in the old primary. 
This causes the primary and standby # to have "diverged", it's no longer possible to just apply the - # standy's logs over primary directory - you need to rewind. + # standby's logs over primary directory - you need to rewind. primary_psql("INSERT INTO tbl1 VALUES ('in primary, after promotion')"); # Also insert a new row in the standby, which won't be present in the diff --git a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl index 7d1bb65cae..ad085d41ad 100644 --- a/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl +++ b/src/bin/pg_rewind/t/004_pg_xlog_symlink.pl @@ -52,7 +52,7 @@ sub run_test # Insert a row in the old primary. This causes the primary and standby # to have "diverged", it's no longer possible to just apply the - # standy's logs over primary directory - you need to rewind. + # standby's logs over primary directory - you need to rewind. primary_psql("INSERT INTO tbl1 VALUES ('in primary, after promotion')"); # Also insert a new row in the standby, which won't be present in the diff --git a/src/bin/pg_rewind/t/007_standby_source.pl b/src/bin/pg_rewind/t/007_standby_source.pl index fab84a4bbb..47e8857198 100644 --- a/src/bin/pg_rewind/t/007_standby_source.pl +++ b/src/bin/pg_rewind/t/007_standby_source.pl @@ -86,7 +86,7 @@ # Insert a row in A. This causes A/B and C to have "diverged", so that it's -# no longer possible to just apply the standy's logs over primary directory +# no longer possible to just apply the standby's logs over primary directory # - you need to rewind. $node_a->safe_psql('postgres', "INSERT INTO tbl1 VALUES ('in A, after C was promoted')"); diff --git a/src/bin/pg_rewind/t/009_growing_files.pl b/src/bin/pg_rewind/t/009_growing_files.pl index 016f7736e7..c456a387b2 100644 --- a/src/bin/pg_rewind/t/009_growing_files.pl +++ b/src/bin/pg_rewind/t/009_growing_files.pl @@ -28,7 +28,7 @@ RewindTest::promote_standby(); # Insert a row in the old primary. This causes the primary and standby to have -# "diverged", it's no longer possible to just apply the standy's logs over +# "diverged", it's no longer possible to just apply the standby's logs over # primary directory - you need to rewind. Also insert a new row in the # standby, which won't be present in the old primary. primary_psql("INSERT INTO tbl1 VALUES ('in primary, after promotion')"); diff --git a/src/include/pg_config_manual.h b/src/include/pg_config_manual.h index 16c383ba7f..fd53732966 100644 --- a/src/include/pg_config_manual.h +++ b/src/include/pg_config_manual.h @@ -337,7 +337,7 @@ /* * Define this to force Bitmapset reallocation on each modification. Helps - * to find hangling pointers to Bitmapset's. + * to find dangling pointers to Bitmapset's. 
*/ /* #define REALLOCATE_BITMAPSETS */ diff --git a/src/test/isolation/specs/stats.spec b/src/test/isolation/specs/stats.spec index 5b922d788c..a7daf2a49a 100644 --- a/src/test/isolation/specs/stats.spec +++ b/src/test/isolation/specs/stats.spec @@ -543,10 +543,10 @@ permutation s1_table_insert s1_begin s1_table_update_k1 # should *not* be counted, different rel - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_table_truncate s1_table_insert_k1 # should be counted - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_prepare_a s1_commit_prepared_a s1_ff @@ -557,10 +557,10 @@ permutation s1_table_insert s1_begin s1_table_update_k1 # should *not* be counted, different rel - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_table_truncate s1_table_insert_k1 # should be counted - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_prepare_a s1_ff # flush out non-transactional stats, might happen anyway s2_commit_prepared_a @@ -572,10 +572,10 @@ permutation s1_table_insert s1_begin s1_table_update_k1 # should be counted - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_table_truncate s1_table_insert_k1 # should *not* be counted, different rel - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_prepare_a s1_rollback_prepared_a s1_ff @@ -586,10 +586,10 @@ permutation s1_table_insert s1_begin s1_table_update_k1 # should be counted - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_table_truncate s1_table_insert_k1 # should *not* be counted, different rel - s1_table_update_k1 # dito + s1_table_update_k1 # ditto s1_prepare_a s2_rollback_prepared_a s1_ff s2_ff diff --git a/src/test/recovery/t/029_stats_restart.pl b/src/test/recovery/t/029_stats_restart.pl index e350a5e8aa..56dc76609c 100644 --- a/src/test/recovery/t/029_stats_restart.pl +++ b/src/test/recovery/t/029_stats_restart.pl @@ -1,7 +1,7 @@ # Copyright (c) 2021-2023, PostgreSQL Global Development Group # Tests statistics handling around restarts, including handling of crashes and -# invalid stats files, as well as restorting stats after "normal" restarts. +# invalid stats files, as well as restarting stats after "normal" restarts. use strict; use warnings FATAL => 'all'; diff --git a/src/test/regress/expected/boolean.out b/src/test/regress/expected/boolean.out index ee9c244bf8..57d251eea7 100644 --- a/src/test/regress/expected/boolean.out +++ b/src/test/regress/expected/boolean.out @@ -486,7 +486,7 @@ FROM booltbl3 ORDER BY o; -- Test to make sure short-circuiting and NULL handling is -- correct. Use a table as source to prevent constant simplification --- to interfer. +-- from interfering. 
CREATE TABLE booltbl4(isfalse bool, istrue bool, isnul bool); INSERT INTO booltbl4 VALUES (false, true, null); \pset null '(null)' diff --git a/src/test/regress/expected/brin_multi.out b/src/test/regress/expected/brin_multi.out index 7df42865da..ae9ce9d8ec 100644 --- a/src/test/regress/expected/brin_multi.out +++ b/src/test/regress/expected/brin_multi.out @@ -826,11 +826,11 @@ RESET enable_seqscan; -- test overflows during CREATE INDEX with extreme timestamp values CREATE TABLE brin_timestamp_test(a TIMESTAMPTZ); SET datestyle TO iso; --- values close to timetamp minimum +-- values close to timestamp minimum INSERT INTO brin_timestamp_test SELECT '4713-01-01 00:00:01 BC'::timestamptz + (i || ' seconds')::interval FROM generate_series(1,30) s(i); --- values close to timetamp maximum +-- values close to timestamp maximum INSERT INTO brin_timestamp_test SELECT '294276-12-01 00:00:01'::timestamptz + (i || ' seconds')::interval FROM generate_series(1,30) s(i); diff --git a/src/test/regress/expected/join.out b/src/test/regress/expected/join.out index 1557e17299..e499a35791 100644 --- a/src/test/regress/expected/join.out +++ b/src/test/regress/expected/join.out @@ -6945,7 +6945,7 @@ WHERE q0.a = 1; (7 rows) -- ----- Only one side is unqiue +---- Only one side is unique --select * from sl t1, sl t2 where t1.a = t2.a and t1.b = 1; --select * from sl t1, sl t2 where t1.a = t2.a and t2.b = 1; -- diff --git a/src/test/regress/sql/boolean.sql b/src/test/regress/sql/boolean.sql index bc9937d692..5b9dcd2317 100644 --- a/src/test/regress/sql/boolean.sql +++ b/src/test/regress/sql/boolean.sql @@ -227,7 +227,7 @@ FROM booltbl3 ORDER BY o; -- Test to make sure short-circuiting and NULL handling is -- correct. Use a table as source to prevent constant simplification --- to interfer. +-- from interfering. CREATE TABLE booltbl4(isfalse bool, istrue bool, isnul bool); INSERT INTO booltbl4 VALUES (false, true, null); \pset null '(null)' diff --git a/src/test/regress/sql/brin_multi.sql b/src/test/regress/sql/brin_multi.sql index c5a8484584..55349b4e1f 100644 --- a/src/test/regress/sql/brin_multi.sql +++ b/src/test/regress/sql/brin_multi.sql @@ -592,12 +592,12 @@ CREATE TABLE brin_timestamp_test(a TIMESTAMPTZ); SET datestyle TO iso; --- values close to timetamp minimum +-- values close to timestamp minimum INSERT INTO brin_timestamp_test SELECT '4713-01-01 00:00:01 BC'::timestamptz + (i || ' seconds')::interval FROM generate_series(1,30) s(i); --- values close to timetamp maximum +-- values close to timestamp maximum INSERT INTO brin_timestamp_test SELECT '294276-12-01 00:00:01'::timestamptz + (i || ' seconds')::interval FROM generate_series(1,30) s(i); diff --git a/src/test/regress/sql/join.sql b/src/test/regress/sql/join.sql index fed9e83e31..4ac63edae1 100644 --- a/src/test/regress/sql/join.sql +++ b/src/test/regress/sql/join.sql @@ -2650,7 +2650,7 @@ SELECT * FROM WHERE q0.a = 1; -- ----- Only one side is unqiue +---- Only one side is unique --select * from sl t1, sl t2 where t1.a = t2.a and t1.b = 1; --select * from sl t1, sl t2 where t1.a = t2.a and t2.b = 1; -- -- 2.39.2
>From 0d6275c012e322b79e45aa0aa8e411e38c80d418 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 00:42:57 +0000 Subject: [PATCH v2 02/12] tsquery: fix typo "rewrited" -> "rewritten" --- src/backend/utils/adt/tsquery_rewrite.c | 26 ++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/src/backend/utils/adt/tsquery_rewrite.c b/src/backend/utils/adt/tsquery_rewrite.c index 7e73635162..9d7a79cc71 100644 --- a/src/backend/utils/adt/tsquery_rewrite.c +++ b/src/backend/utils/adt/tsquery_rewrite.c @@ -281,7 +281,7 @@ tsquery_rewrite_query(PG_FUNCTION_ARGS) { TSQuery query = PG_GETARG_TSQUERY_COPY(0); text *in = PG_GETARG_TEXT_PP(1); - TSQuery rewrited = query; + TSQuery rewritten = query; MemoryContext outercontext = CurrentMemoryContext; MemoryContext oldcontext; QTNode *tree; @@ -293,7 +293,7 @@ tsquery_rewrite_query(PG_FUNCTION_ARGS) if (query->size == 0) { PG_FREE_IF_COPY(in, 1); - PG_RETURN_POINTER(rewrited); + PG_RETURN_POINTER(rewritten); } tree = QT2QTN(GETQUERY(query), GETOPERAND(query)); @@ -391,19 +391,19 @@ tsquery_rewrite_query(PG_FUNCTION_ARGS) if (tree) { QTNBinary(tree); - rewrited = QTN2QT(tree); + rewritten = QTN2QT(tree); QTNFree(tree); PG_FREE_IF_COPY(query, 0); } else { - SET_VARSIZE(rewrited, HDRSIZETQ); - rewrited->size = 0; + SET_VARSIZE(rewritten, HDRSIZETQ); + rewritten->size = 0; } pfree(buf); PG_FREE_IF_COPY(in, 1); - PG_RETURN_POINTER(rewrited); + PG_RETURN_POINTER(rewritten); } Datum @@ -412,7 +412,7 @@ tsquery_rewrite(PG_FUNCTION_ARGS) TSQuery query = PG_GETARG_TSQUERY_COPY(0); TSQuery ex = PG_GETARG_TSQUERY(1); TSQuery subst = PG_GETARG_TSQUERY(2); - TSQuery rewrited = query; + TSQuery rewritten = query; QTNode *tree, *qex, *subs = NULL; @@ -421,7 +421,7 @@ tsquery_rewrite(PG_FUNCTION_ARGS) { PG_FREE_IF_COPY(ex, 1); PG_FREE_IF_COPY(subst, 2); - PG_RETURN_POINTER(rewrited); + PG_RETURN_POINTER(rewritten); } tree = QT2QTN(GETQUERY(query), GETOPERAND(query)); @@ -442,21 +442,21 @@ tsquery_rewrite(PG_FUNCTION_ARGS) if (!tree) { - SET_VARSIZE(rewrited, HDRSIZETQ); - rewrited->size = 0; + SET_VARSIZE(rewritten, HDRSIZETQ); + rewritten->size = 0; PG_FREE_IF_COPY(ex, 1); PG_FREE_IF_COPY(subst, 2); - PG_RETURN_POINTER(rewrited); + PG_RETURN_POINTER(rewritten); } else { QTNBinary(tree); - rewrited = QTN2QT(tree); + rewritten = QTN2QT(tree); QTNFree(tree); } PG_FREE_IF_COPY(query, 0); PG_FREE_IF_COPY(ex, 1); PG_FREE_IF_COPY(subst, 2); - PG_RETURN_POINTER(rewrited); + PG_RETURN_POINTER(rewritten); } -- 2.39.2
>From c3e49c6d81d7825785c536b0b4ab8699d77a042e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 00:43:55 +0000 Subject: [PATCH v2 03/12] gist: fix typo "split(t)ed" -> "split" --- src/backend/access/gist/gist.c | 14 +++++++------- src/backend/access/gist/gistbuild.c | 6 +++--- src/backend/access/gist/gistbuildbuffers.c | 8 ++++---- src/backend/access/gist/gistxlog.c | 4 ++-- src/include/access/gist_private.h | 14 +++++++------- src/include/access/gistxlog.h | 2 +- src/tools/pgindent/typedefs.list | 2 +- 7 files changed, 25 insertions(+), 25 deletions(-) diff --git a/src/backend/access/gist/gist.c b/src/backend/access/gist/gist.c index e052ba8bda..133676892e 100644 --- a/src/backend/access/gist/gist.c +++ b/src/backend/access/gist/gist.c @@ -44,7 +44,7 @@ static void gistprunepage(Relation rel, Page page, Buffer buffer, #define ROTATEDIST(d) do { \ - SplitedPageLayout *tmp=(SplitedPageLayout*)palloc0(sizeof(SplitedPageLayout)); \ + SplitPageLayout *tmp=(SplitPageLayout*)palloc0(sizeof(SplitPageLayout)); \ tmp->block.blkno = InvalidBlockNumber; \ tmp->buffer = InvalidBuffer; \ tmp->next = (d); \ @@ -283,11 +283,11 @@ gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate, /* no space for insertion */ IndexTuple *itvec; int tlen; - SplitedPageLayout *dist = NULL, + SplitPageLayout *dist = NULL, *ptr; BlockNumber oldrlink = InvalidBlockNumber; GistNSN oldnsn = 0; - SplitedPageLayout rootpg; + SplitPageLayout rootpg; bool is_rootsplit; int npage; @@ -1080,7 +1080,7 @@ gistFindCorrectParent(Relation r, GISTInsertStack *child, bool is_build) { /* * End of chain and still didn't find parent. It's a very-very - * rare situation when root splitted. + * rare situation when the root was split. */ break; } @@ -1435,7 +1435,7 @@ gistfinishsplit(GISTInsertState *state, GISTInsertStack *stack, * used for XLOG and real writes buffers. Function is recursive, ie * it will split page until keys will fit in every page. 
*/ -SplitedPageLayout * +SplitPageLayout * gistSplit(Relation r, Page page, IndexTuple *itup, /* contains compressed entry */ @@ -1446,7 +1446,7 @@ gistSplit(Relation r, *rvectup; GistSplitVector v; int i; - SplitedPageLayout *res = NULL; + SplitPageLayout *res = NULL; /* this should never recurse very deeply, but better safe than sorry */ check_stack_depth(); @@ -1496,7 +1496,7 @@ gistSplit(Relation r, if (!gistfitpage(lvectup, v.splitVector.spl_nleft)) { - SplitedPageLayout *resptr, + SplitPageLayout *resptr, *subres; resptr = subres = gistSplit(r, page, lvectup, v.splitVector.spl_nleft, giststate); diff --git a/src/backend/access/gist/gistbuild.c b/src/backend/access/gist/gistbuild.c index cacf50b269..a59a4570e1 100644 --- a/src/backend/access/gist/gistbuild.c +++ b/src/backend/access/gist/gistbuild.c @@ -527,7 +527,7 @@ gist_indexsortbuild_levelstate_flush(GISTBuildState *state, BlockNumber blkno; MemoryContext oldCtx; IndexTuple union_tuple; - SplitedPageLayout *dist; + SplitPageLayout *dist; IndexTuple *itvec; int vect_len; bool isleaf = GistPageIsLeaf(levelstate->pages[0]); @@ -555,8 +555,8 @@ gist_indexsortbuild_levelstate_flush(GISTBuildState *state, } else { - /* Create splitted layout from single page */ - dist = (SplitedPageLayout *) palloc0(sizeof(SplitedPageLayout)); + /* Create split layout from single page */ + dist = (SplitPageLayout *) palloc0(sizeof(SplitPageLayout)); union_tuple = gistunion(state->indexrel, itvec, vect_len, state->giststate); dist->itup = union_tuple; diff --git a/src/backend/access/gist/gistbuildbuffers.c b/src/backend/access/gist/gistbuildbuffers.c index 1423b4b047..fb5e2ffcb3 100644 --- a/src/backend/access/gist/gistbuildbuffers.c +++ b/src/backend/access/gist/gistbuildbuffers.c @@ -163,7 +163,7 @@ gistGetNodeBuffer(GISTBuildBuffers *gfbb, GISTSTATE *giststate, * not arbitrary that the new buffer is put to the beginning of the * list: in the final emptying phase we loop through all buffers at * each level, and flush them. If a page is split during the emptying, - * it's more efficient to flush the new splitted pages first, before + * it's more efficient to flush the new split pages first, before * moving on to pre-existing pages on the level. The buffers just * created during the page split are likely still in cache, so * flushing them immediately is more efficient than putting them to @@ -518,7 +518,7 @@ gistFreeBuildBuffers(GISTBuildBuffers *gfbb) /* * Data structure representing information about node buffer for index tuples - * relocation from splitted node buffer. + * relocation from split node buffer. */ typedef struct { @@ -549,12 +549,12 @@ gistRelocateBuildBuffersOnSplit(GISTBuildBuffers *gfbb, GISTSTATE *giststate, GISTNodeBuffer oldBuf; ListCell *lc; - /* If the splitted page doesn't have buffers, we have nothing to do. */ + /* If the split page doesn't have buffers, we have nothing to do. */ if (!LEVEL_HAS_BUFFERS(level, gfbb)) return; /* - * Get the node buffer of the splitted page. + * Get the node buffer of the split page. 
*/ blocknum = BufferGetBlockNumber(buffer); nodeBuffer = hash_search(gfbb->nodeBuffersTab, &blocknum, diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c index 15249aa921..77e5954e7b 100644 --- a/src/backend/access/gist/gistxlog.c +++ b/src/backend/access/gist/gistxlog.c @@ -495,12 +495,12 @@ gist_mask(char *pagedata, BlockNumber blkno) */ XLogRecPtr gistXLogSplit(bool page_is_leaf, - SplitedPageLayout *dist, + SplitPageLayout *dist, BlockNumber origrlink, GistNSN orignsn, Buffer leftchildbuf, bool markfollowright) { gistxlogPageSplit xlrec; - SplitedPageLayout *ptr; + SplitPageLayout *ptr; int npage = 0; XLogRecPtr recptr; int i; diff --git a/src/include/access/gist_private.h b/src/include/access/gist_private.h index 82eb7b4bd8..ed183d375d 100644 --- a/src/include/access/gist_private.h +++ b/src/include/access/gist_private.h @@ -187,8 +187,8 @@ typedef struct gistxlogPage int num; /* number of index tuples following */ } gistxlogPage; -/* SplitedPageLayout - gistSplit function result */ -typedef struct SplitedPageLayout +/* SplitPageLayout - gistSplit function result */ +typedef struct SplitPageLayout { gistxlogPage block; IndexTupleData *list; @@ -197,8 +197,8 @@ typedef struct SplitedPageLayout Page page; /* to operate */ Buffer buffer; /* to write after all proceed */ - struct SplitedPageLayout *next; -} SplitedPageLayout; + struct SplitPageLayout *next; +} SplitPageLayout; /* * GISTInsertStack used for locking buffers and transfer arguments during @@ -432,8 +432,8 @@ extern bool gistplacetopage(Relation rel, Size freespace, GISTSTATE *giststate, Relation heapRel, bool is_build); -extern SplitedPageLayout *gistSplit(Relation r, Page page, IndexTuple *itup, - int len, GISTSTATE *giststate); +extern SplitPageLayout *gistSplit(Relation r, Page page, IndexTuple *itup, + int len, GISTSTATE *giststate); /* gistxlog.c */ extern XLogRecPtr gistXLogPageDelete(Buffer buffer, @@ -453,7 +453,7 @@ extern XLogRecPtr gistXLogDelete(Buffer buffer, OffsetNumber *todelete, Relation heaprel); extern XLogRecPtr gistXLogSplit(bool page_is_leaf, - SplitedPageLayout *dist, + SplitPageLayout *dist, BlockNumber origrlink, GistNSN orignsn, Buffer leftchildbuf, bool markfollowright); diff --git a/src/include/access/gistxlog.h b/src/include/access/gistxlog.h index aff2ffbdcc..988db12e39 100644 --- a/src/include/access/gistxlog.h +++ b/src/include/access/gistxlog.h @@ -69,7 +69,7 @@ typedef struct gistxlogPageSplit { BlockNumber origrlink; /* rightlink of the page before split */ GistNSN orignsn; /* NSN of the page before split */ - bool origleaf; /* was splitted page a leaf page? */ + bool origleaf; /* was split page a leaf page? */ uint16 npage; /* # of pages in the split */ bool markfollowright; /* set F_FOLLOW_RIGHT flags */ diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list index ee2ad7aa45..9f6e9bdc4c 100644 --- a/src/tools/pgindent/typedefs.list +++ b/src/tools/pgindent/typedefs.list @@ -2634,10 +2634,10 @@ SpecialJoinInfo SpinDelayStatus SplitInterval SplitLR +SplitPageLayout SplitPoint SplitTextOutputData SplitVar -SplitedPageLayout StackElem StartDataPtrType StartLOPtrType -- 2.39.2
>From e4bbe1613a3924f29209f2dd4346cf4ff1d88fa1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 00:47:06 +0000 Subject: [PATCH v2 04/12] libpq: fix typo "occurences" -> "occurrences" in tests --- .../libpq/t/003_load_balance_host_list.pl | 18 +++++++++--------- src/interfaces/libpq/t/004_load_balance_dns.pl | 18 +++++++++--------- 2 files changed, 18 insertions(+), 18 deletions(-) diff --git a/src/interfaces/libpq/t/003_load_balance_host_list.pl b/src/interfaces/libpq/t/003_load_balance_host_list.pl index c6fe049fe5..6d820bec2b 100644 --- a/src/interfaces/libpq/t/003_load_balance_host_list.pl +++ b/src/interfaces/libpq/t/003_load_balance_host_list.pl @@ -51,20 +51,20 @@ sql => "SELECT 'connect2'"); } -my $node1_occurences = () = +my $node1_occurrences = () = $node1->log_content() =~ /statement: SELECT 'connect2'/g; -my $node2_occurences = () = +my $node2_occurrences = () = $node2->log_content() =~ /statement: SELECT 'connect2'/g; -my $node3_occurences = () = +my $node3_occurrences = () = $node3->log_content() =~ /statement: SELECT 'connect2'/g; -my $total_occurences = - $node1_occurences + $node2_occurences + $node3_occurences; +my $total_occurrences = + $node1_occurrences + $node2_occurrences + $node3_occurrences; -ok($node1_occurences > 1, "received at least one connection on node1"); -ok($node2_occurences > 1, "received at least one connection on node2"); -ok($node3_occurences > 1, "received at least one connection on node3"); -ok($total_occurences == 50, "received 50 connections across all nodes"); +ok($node1_occurrences > 1, "received at least one connection on node1"); +ok($node2_occurrences > 1, "received at least one connection on node2"); +ok($node3_occurrences > 1, "received at least one connection on node3"); +ok($total_occurrences == 50, "received 50 connections across all nodes"); $node1->stop(); $node2->stop(); diff --git a/src/interfaces/libpq/t/004_load_balance_dns.pl b/src/interfaces/libpq/t/004_load_balance_dns.pl index 49f1f5f331..977b67ff7e 100644 --- a/src/interfaces/libpq/t/004_load_balance_dns.pl +++ b/src/interfaces/libpq/t/004_load_balance_dns.pl @@ -101,20 +101,20 @@ sql => "SELECT 'connect2'"); } -my $node1_occurences = () = +my $node1_occurrences = () = $node1->log_content() =~ /statement: SELECT 'connect2'/g; -my $node2_occurences = () = +my $node2_occurrences = () = $node2->log_content() =~ /statement: SELECT 'connect2'/g; -my $node3_occurences = () = +my $node3_occurrences = () = $node3->log_content() =~ /statement: SELECT 'connect2'/g; -my $total_occurences = - $node1_occurences + $node2_occurences + $node3_occurences; +my $total_occurrences = + $node1_occurrences + $node2_occurrences + $node3_occurrences; -ok($node1_occurences > 1, "received at least one connection on node1"); -ok($node2_occurences > 1, "received at least one connection on node2"); -ok($node3_occurences > 1, "received at least one connection on node3"); -ok($total_occurences == 50, "received 50 connections across all nodes"); +ok($node1_occurrences > 1, "received at least one connection on node1"); +ok($node2_occurrences > 1, "received at least one connection on node2"); +ok($node3_occurrences > 1, "received at least one connection on node3"); +ok($total_occurrences == 50, "received 50 connections across all nodes"); $node1->stop(); $node2->stop(); -- 2.39.2
>From 3b8c216dda4c79984b4dd9a43bafce8452395a04 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 01:31:54 +0000 Subject: [PATCH v2 05/12] jsonpath_exec: fix typo "absense" -> "absence" --- src/backend/utils/adt/jsonpath_exec.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c index 2d0599b4aa..9a09604f64 100644 --- a/src/backend/utils/adt/jsonpath_exec.c +++ b/src/backend/utils/adt/jsonpath_exec.c @@ -153,7 +153,7 @@ typedef struct JsonValueListIterator } JsonValueListIterator; /* strict/lax flags is decomposed into four [un]wrap/error flags */ -#define jspStrictAbsenseOfErrors(cxt) (!(cxt)->laxMode) +#define jspStrictAbsenceOfErrors(cxt) (!(cxt)->laxMode) #define jspAutoUnwrap(cxt) ((cxt)->laxMode) #define jspAutoWrap(cxt) ((cxt)->laxMode) #define jspIgnoreStructuralErrors(cxt) ((cxt)->ignoreStructuralErrors) @@ -570,7 +570,7 @@ executeJsonPath(JsonPath *path, Jsonb *vars, Jsonb *json, bool throwErrors, cxt.throwErrors = throwErrors; cxt.useTz = useTz; - if (jspStrictAbsenseOfErrors(&cxt) && !result) + if (jspStrictAbsenceOfErrors(&cxt) && !result) { /* * In strict mode we must get a complete list of values to check that @@ -1318,7 +1318,7 @@ executeBoolItem(JsonPathExecContext *cxt, JsonPathItem *jsp, case jpiExists: jspGetArg(jsp, &larg); - if (jspStrictAbsenseOfErrors(cxt)) + if (jspStrictAbsenceOfErrors(cxt)) { /* * In strict mode we must get a complete list of values to @@ -1516,14 +1516,14 @@ executePredicate(JsonPathExecContext *cxt, JsonPathItem *pred, if (res == jpbUnknown) { - if (jspStrictAbsenseOfErrors(cxt)) + if (jspStrictAbsenceOfErrors(cxt)) return jpbUnknown; error = true; } else if (res == jpbTrue) { - if (!jspStrictAbsenseOfErrors(cxt)) + if (!jspStrictAbsenceOfErrors(cxt)) return jpbTrue; found = true; -- 2.39.2
>From 979d66a318ef555b234cbdc2b8e96cc306954332 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 01:31:33 +0000 Subject: [PATCH v2 06/12] jsonpath_gram: fix typo "indexs" -> "indices" --- src/backend/utils/adt/jsonpath_gram.y | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/src/backend/utils/adt/jsonpath_gram.y b/src/backend/utils/adt/jsonpath_gram.y index adc259d5bf..95666b9dd3 100644 --- a/src/backend/utils/adt/jsonpath_gram.y +++ b/src/backend/utils/adt/jsonpath_gram.y @@ -67,7 +67,7 @@ static bool makeItemLikeRegex(JsonPathParseItem *expr, { JsonPathString str; List *elems; /* list of JsonPathParseItem */ - List *indexs; /* list of integers */ + List *indices; /* list of integers */ JsonPathParseItem *value; JsonPathParseResult *result; JsonPathItemType optype; @@ -92,7 +92,7 @@ static bool makeItemLikeRegex(JsonPathParseItem *expr, %type <elems> accessor_expr -%type <indexs> index_list +%type <indices> index_list %type <optype> comp_op method -- 2.39.2
>From 6cfc614ed31650e02884dd2f7f0618d46166d99e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 02:07:53 +0000 Subject: [PATCH v2 07/12] pg_archivecleanup: fix typo "extention" -> "extension" in help message --- src/bin/pg_archivecleanup/pg_archivecleanup.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/bin/pg_archivecleanup/pg_archivecleanup.c b/src/bin/pg_archivecleanup/pg_archivecleanup.c index 2c3b301f3b..07bf356b70 100644 --- a/src/bin/pg_archivecleanup/pg_archivecleanup.c +++ b/src/bin/pg_archivecleanup/pg_archivecleanup.c @@ -265,7 +265,7 @@ usage(void) printf(_(" -n, --dry-run dry run, show the names of the files that would be\n" " removed\n")); printf(_(" -V, --version output version information, then exit\n")); - printf(_(" -x, --strip-extension=EXT strip this extention before identifying files for\n" + printf(_(" -x, --strip-extension=EXT strip this extension before identifying files for\n" " clean up\n")); printf(_(" -?, --help show this help, then exit\n")); printf(_("\n" -- 2.39.2
>From 10a3970a8c7920954c74ee5424776fd614ce3659 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 01:46:41 +0000 Subject: [PATCH v2 08/12] ecpg: fix typo in get_dtype return value for ECPGd_count All the other enum values return a string matching the enum label, but this has had a trailing r since the function was added in commit 339a5bbfb17ecd171ebe076c5bf016c4e66e2c0a If this case were actually hit in get_dtype() the generated C code would fail to compile, so it doesn't seem to be an actual live bug. --- src/interfaces/ecpg/preproc/type.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/interfaces/ecpg/preproc/type.c b/src/interfaces/ecpg/preproc/type.c index 91adb89de9..a842bb6a1f 100644 --- a/src/interfaces/ecpg/preproc/type.c +++ b/src/interfaces/ecpg/preproc/type.c @@ -695,7 +695,7 @@ get_dtype(enum ECPGdtype type) switch (type) { case ECPGd_count: - return "ECPGd_countr"; + return "ECPGd_count"; break; case ECPGd_data: return "ECPGd_data"; -- 2.39.2
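To make the commit message's point concrete, here is a minimal, self-contained C sketch. It is an illustration only — a simplified stand-in, not ecpg's actual generated output — of why a misspelled string returned by get_dtype() could never survive unnoticed: the string ends up in the generated C source as an enum identifier, so the typo would surface as an "undeclared identifier" compile error rather than as a silent runtime bug.

    /* Trimmed-down stand-in for enum ECPGdtype (illustration only). */
    enum ECPGdtype
    {
        ECPGd_count,
        ECPGd_data
    };

    int
    main(void)
    {
        /* Correctly generated code spells the enum label exactly. */
        enum ECPGdtype item = ECPGd_count;

        /*
         * Had get_dtype() ever emitted its misspelled string, the generated
         * source would have contained the line below, which the compiler
         * rejects as an undeclared identifier:
         *
         *     enum ECPGdtype item = ECPGd_countr;
         */
        return (int) item;
    }

With the fix, the emitted identifier always matches a real ECPGdtype label, which is what every other branch of the switch already does.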
>From f27b424a66eb790820cd61438f88fcd93f0fc56f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 01:56:09 +0000 Subject: [PATCH v2 09/12] ci: fix typo in macports check: "superfluos" -> "superfluous" --- src/tools/ci/ci_macports_packages.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/tools/ci/ci_macports_packages.sh b/src/tools/ci/ci_macports_packages.sh index 4bc594a31d..f87256e090 100755 --- a/src/tools/ci/ci_macports_packages.sh +++ b/src/tools/ci/ci_macports_packages.sh @@ -66,7 +66,7 @@ fi # check if any ports should be uninstalled if [ -n "$(port -q installed rleaves)" ] ; then - echo superflous packages installed + echo superfluous packages installed update_cached_image=1 sudo port uninstall --follow-dependencies rleaves -- 2.39.2
>From 1ffd057aa12040e9b7cb278affb234454f0876f6 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Fri, 22 Dec 2023 02:00:04 +0000 Subject: [PATCH v2 10/12] doc: fix typo "vertexes" -> "vertices" The "vertexes" spelling is also valid, but we consistently use "vertices" elsewhere. --- doc/src/sgml/datatype.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml index e4a7b07033..b3a92b9aab 100644 --- a/doc/src/sgml/datatype.sgml +++ b/doc/src/sgml/datatype.sgml @@ -3590,7 +3590,7 @@ </indexterm> <para> - Polygons are represented by lists of points (the vertexes of the + Polygons are represented by lists of points (the vertices of the polygon). Polygons are very similar to closed paths; the essential difference is that a polygon is considered to include the area within it, while a path is not. -- 2.39.2
>From 68724d876f14229e973ca816b26a06373f658327 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Dagfinn=20Ilmari=20Manns=C3=A5ker?= <ilm...@ilmari.org> Date: Wed, 27 Dec 2023 20:30:28 +0000 Subject: [PATCH v2 11/12] doc: fix typo "formattings" in rangetypes docs --- doc/src/sgml/rangetypes.sgml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/src/sgml/rangetypes.sgml b/doc/src/sgml/rangetypes.sgml index 92ea0e83da..7826865740 100644 --- a/doc/src/sgml/rangetypes.sgml +++ b/doc/src/sgml/rangetypes.sgml @@ -412,7 +412,7 @@ for example the integer ranges <literal>[1, 7]</literal> and <literal>[1, 8)</literal>, must be identical. It doesn't matter which representation you choose to be the canonical one, so long as two equivalent values with - different formattings are always mapped to the same value with the same + different formatting are always mapped to the same value with the same formatting. In addition to adjusting the inclusive/exclusive bounds format, a canonicalization function might round off boundary values, in case the desired step size is larger than what the subtype is capable of -- 2.39.2