On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pry...@telsasoft.com> wrote:
>
> On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:
> > I'm probably not the only committer to want to run a mile when they
> > see someone posting 17 or 26 patches in an email. So maybe "bang for
> > buck" is a better method for getting the ball rolling here.  As you
> > know, I was recently bitten by local shadows in af7d270dd, so I do
> > believe in the cause.
> >
> > What do you think?
>
> You already fixed the shadow var introduced in master/pg16, and I sent patches
> for the shadow vars added in pg15 (marked as such and presented as 001-008), so
> perhaps it's okay to start with that ?

Alright, I made a pass over the 0001-0008 patches.

0001. I'd also rather see these 4 renamed:

+++ b/src/bin/pg_dump/pg_dump.c
@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)
  PQExpBuffer loHorizonQry = createPQExpBuffer();
  int i_relfrozenxid,
  i_relfilenode,
- i_oid,
  i_relminmxid;

Adding an extra 'i' (for inner) on the front seems fine to me.

0002. I don't really like the "my" name. I also see you've added the
word "this" to many other variables that are shadowing. It feels kinda
like you're missing a "self" and a "me" in there somewhere! :)

@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
  appendPQExpBufferChar(tbloids, '{');
  for (int i = 0; i < numTables; i++)
  {
- TableInfo  *tbinfo = &tblinfo[i];
+ TableInfo  *mytbinfo = &tblinfo[i];

How about just "tinfo"?

0003. The following is used for the exact same purpose as its shadowed
counterpart. I suggest just using the variable from the outer scope.

@@ -16799,21 +16799,21 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
  */
  if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
  {
- TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);
+ TableInfo  *this_owning_tab = findTableByOid(tbinfo->owning_tab);

0004. I would rather people used foreach_current_index(lc) > 0 to
determine when we're not doing the first iteration of a foreach loop.
I understand there are more complex cases with filtering where this
cannot be done, but these are highly simple, and using
foreach_current_index() removes multiple lines of code and makes it
look nicer.

@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,
  TupleTableSlot *slot;
  Oid attrsRow[] = {INT2VECTOROID};
  StringInfoData pub_names;
- bool first = true;

+ first = true;
  initStringInfo(&pub_names);
  foreach(lc, MySubscription->publications)

0005. How about just "tslot". I'm not a fan of "this".

+++ b/src/backend/replication/logical/tablesync.c
@@ -759,7 +759,7 @@ fetch_remote_table_info(char *nspname, char *relname,
  if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
  {
  WalRcvExecResult *pubres;
- TupleTableSlot *slot;
+ TupleTableSlot *thisslot;

0006. I see the outer shadowed counterpart is used to add a new backup
type.  Since I'm not a fan of "this", how about the outer one gets
renamed to "newtype"?

+++ b/src/backend/backup/basebackup_target.c
@@ -73,9 +73,9 @@ BaseBackupAddTarget(char *name,
  /* Search the target type list for an existing entry with this name. */
  foreach(lc, BaseBackupTargetTypeList)
  {
- BaseBackupTargetType *ttype = lfirst(lc);
+ BaseBackupTargetType *this_ttype = lfirst(lc);

0007. Meh, more "this". How about just "col".

+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
  /* transform all nested columns into cross/union join */
  foreach(lc, columns)
  {
- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+ JsonTableColumn *thisjtc = castNode(JsonTableColumn, lfirst(lc));

There's a discussion about reverting this entire patch. Not sure if
patching master and not backpatching to pg15 would be useful to the
people who may be doing that revert.

0008. Sorry, I had to change this one too. I just have an aversion to
variables named "temp" or "tmp".

+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)

  if (JsonContainerIsScalar(&jb->root))
  {
- bool res PG_USED_FOR_ASSERTS_ONLY;
+ bool tmp PG_USED_FOR_ASSERTS_ONLY;

- res = JsonbExtractScalar(&jb->root, jbv);
- Assert(res);
+ tmp = JsonbExtractScalar(&jb->root, jbv);
+ Assert(tmp);

I've attached a patch which does things more along the lines of how I
would have done it.  I don't think we should be back patching this
stuff.

Any objections to pushing this to master only?

David
diff --git a/src/backend/backup/basebackup_target.c b/src/backend/backup/basebackup_target.c
index 83928e3205..f280660a03 100644
--- a/src/backend/backup/basebackup_target.c
+++ b/src/backend/backup/basebackup_target.c
@@ -62,7 +62,7 @@ BaseBackupAddTarget(char *name,
                                        void *(*check_detail) (char *, char *),
                                        bbsink *(*get_sink) (bbsink *, void *))
 {
-       BaseBackupTargetType *ttype;
+       BaseBackupTargetType *newtype;
        MemoryContext oldcontext;
        ListCell   *lc;
 
@@ -96,11 +96,11 @@ BaseBackupAddTarget(char *name,
         * name into a newly-allocated chunk of memory.
         */
        oldcontext = MemoryContextSwitchTo(TopMemoryContext);
-       ttype = palloc(sizeof(BaseBackupTargetType));
-       ttype->name = pstrdup(name);
-       ttype->check_detail = check_detail;
-       ttype->get_sink = get_sink;
-       BaseBackupTargetTypeList = lappend(BaseBackupTargetTypeList, ttype);
+       newtype = palloc(sizeof(BaseBackupTargetType));
+       newtype->name = pstrdup(name);
+       newtype->check_detail = check_detail;
+       newtype->get_sink = get_sink;
+       BaseBackupTargetTypeList = lappend(BaseBackupTargetTypeList, newtype);
        MemoryContextSwitchTo(oldcontext);
 }
 
diff --git a/src/backend/parser/parse_jsontable.c b/src/backend/parser/parse_jsontable.c
index bc3272017e..3e94071248 100644
--- a/src/backend/parser/parse_jsontable.c
+++ b/src/backend/parser/parse_jsontable.c
@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext *cxt, JsonTablePlan *plan,
                /* transform all nested columns into cross/union join */
                foreach(lc, columns)
                {
-                       JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));
+                       JsonTableColumn *col = castNode(JsonTableColumn, lfirst(lc));
                        Node       *node;
 
-                       if (jtc->coltype != JTC_NESTED)
+                       if (col->coltype != JTC_NESTED)
                                continue;
 
-                       node = transformNestedJsonTableColumn(cxt, jtc, plan);
+                       node = transformNestedJsonTableColumn(cxt, col, plan);
 
                        /* join transformed node with previous sibling nodes */
                        res = res ? makeJsonTableSiblingJoin(cross, res, node) : node;
diff --git a/src/backend/replication/logical/tablesync.c b/src/backend/replication/logical/tablesync.c
index bfcb80b495..d37d8a0d74 100644
--- a/src/backend/replication/logical/tablesync.c
+++ b/src/backend/replication/logical/tablesync.c
@@ -707,7 +707,6 @@ fetch_remote_table_info(char *nspname, char *relname,
        bool            isnull;
        int                     natt;
        ListCell   *lc;
-       bool            first;
        Bitmapset  *included_cols = NULL;
 
        lrel->nspname = nspname;
@@ -759,18 +758,15 @@ fetch_remote_table_info(char *nspname, char *relname,
        if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)
        {
                WalRcvExecResult *pubres;
-               TupleTableSlot *slot;
+               TupleTableSlot *tslot;
                Oid                     attrsRow[] = {INT2VECTOROID};
                StringInfoData pub_names;
-               bool            first = true;
-
                initStringInfo(&pub_names);
                foreach(lc, MySubscription->publications)
                {
-                       if (!first)
+                       if (foreach_current_index(lc) > 0)
                                appendStringInfo(&pub_names, ", ");
                        appendStringInfoString(&pub_names, quote_literal_cstr(strVal(lfirst(lc))));
-                       first = false;
                }
 
                /*
@@ -819,10 +815,10 @@ fetch_remote_table_info(char *nspname, char *relname,
                 * If we find a NULL value, it means all the columns should be
                 * replicated.
                 */
-               slot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
-               if (tuplestore_gettupleslot(pubres->tuplestore, true, false, slot))
+               tslot = MakeSingleTupleTableSlot(pubres->tupledesc, &TTSOpsMinimalTuple);
+               if (tuplestore_gettupleslot(pubres->tuplestore, true, false, tslot))
                {
-                       Datum           cfval = slot_getattr(slot, 1, &isnull);
+                       Datum           cfval = slot_getattr(tslot, 1, &isnull);
 
                        if (!isnull)
                        {
@@ -838,9 +834,9 @@ fetch_remote_table_info(char *nspname, char *relname,
                                        included_cols = bms_add_member(included_cols, elems[natt]);
                        }
 
-                       ExecClearTuple(slot);
+                       ExecClearTuple(tslot);
                }
-               ExecDropSingleTupleTableSlot(slot);
+               ExecDropSingleTupleTableSlot(tslot);
 
                walrcv_clear_result(pubres);
 
@@ -950,14 +946,11 @@ fetch_remote_table_info(char *nspname, char *relname,
 
                /* Build the pubname list. */
                initStringInfo(&pub_names);
-               first = true;
                foreach(lc, MySubscription->publications)
                {
                        char       *pubname = strVal(lfirst(lc));
 
-                       if (first)
-                               first = false;
-                       else
+                       if (foreach_current_index(lc) > 0)
                                appendStringInfoString(&pub_names, ", ");
 
                        appendStringInfoString(&pub_names, quote_literal_cstr(pubname));
diff --git a/src/backend/utils/adt/jsonpath_exec.c b/src/backend/utils/adt/jsonpath_exec.c
index 5b6a480572..9c381ae727 100644
--- a/src/backend/utils/adt/jsonpath_exec.c
+++ b/src/backend/utils/adt/jsonpath_exec.c
@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32 typmod, JsonbValue *res)
 
                                if (JsonContainerIsScalar(&jb->root))
                                {
-                                       bool            res PG_USED_FOR_ASSERTS_ONLY;
+                                       bool            result PG_USED_FOR_ASSERTS_ONLY;
 
-                                       res = JsonbExtractScalar(&jb->root, jbv);
-                                       Assert(res);
+                                       result = JsonbExtractScalar(&jb->root, jbv);
+                                       Assert(result);
                                }
                                else
                                        JsonbInitBinary(jbv, jb);
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index da6605175a..2c68915732 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -3142,10 +3142,10 @@ dumpDatabase(Archive *fout)
                PQExpBuffer loFrozenQry = createPQExpBuffer();
                PQExpBuffer loOutQry = createPQExpBuffer();
                PQExpBuffer loHorizonQry = createPQExpBuffer();
-               int                     i_relfrozenxid,
-                                       i_relfilenode,
-                                       i_oid,
-                                       i_relminmxid;
+               int                     ii_relfrozenxid,
+                                       ii_relfilenode,
+                                       ii_oid,
+                                       ii_relminmxid;
 
                /*
                 * pg_largeobject
@@ -3163,10 +3163,10 @@ dumpDatabase(Archive *fout)
 
                lo_res = ExecuteSqlQuery(fout, loFrozenQry->data, PGRES_TUPLES_OK);
 
-               i_relfrozenxid = PQfnumber(lo_res, "relfrozenxid");
-               i_relminmxid = PQfnumber(lo_res, "relminmxid");
-               i_relfilenode = PQfnumber(lo_res, "relfilenode");
-               i_oid = PQfnumber(lo_res, "oid");
+               ii_relfrozenxid = PQfnumber(lo_res, "relfrozenxid");
+               ii_relminmxid = PQfnumber(lo_res, "relminmxid");
+               ii_relfilenode = PQfnumber(lo_res, "relfilenode");
+               ii_oid = PQfnumber(lo_res, "oid");
 
                appendPQExpBufferStr(loHorizonQry, "\n-- For binary upgrade, set pg_largeobject relfrozenxid and relminmxid\n");
                appendPQExpBufferStr(loOutQry, "\n-- For binary upgrade, preserve pg_largeobject and index relfilenodes\n");
@@ -3178,12 +3178,12 @@ dumpDatabase(Archive *fout)
                        appendPQExpBuffer(loHorizonQry, "UPDATE pg_catalog.pg_class\n"
                                                          "SET relfrozenxid = '%u', relminmxid = '%u'\n"
                                                          "WHERE oid = %u;\n",
-                                                         atooid(PQgetvalue(lo_res, i, i_relfrozenxid)),
-                                                         atooid(PQgetvalue(lo_res, i, i_relminmxid)),
-                                                         atooid(PQgetvalue(lo_res, i, i_oid)));
+                                                         atooid(PQgetvalue(lo_res, i, ii_relfrozenxid)),
+                                                         atooid(PQgetvalue(lo_res, i, ii_relminmxid)),
+                                                         atooid(PQgetvalue(lo_res, i, ii_oid)));
 
-                       oid = atooid(PQgetvalue(lo_res, i, i_oid));
-                       relfilenumber = atooid(PQgetvalue(lo_res, i, i_relfilenode));
+                       oid = atooid(PQgetvalue(lo_res, i, ii_oid));
+                       relfilenumber = atooid(PQgetvalue(lo_res, i, ii_relfilenode));
 
                        if (oid == LargeObjectRelationId)
                                appendPQExpBuffer(loOutQry,
@@ -7081,21 +7081,21 @@ getConstraints(Archive *fout, TableInfo tblinfo[], int numTables)
        appendPQExpBufferChar(tbloids, '{');
        for (int i = 0; i < numTables; i++)
        {
-               TableInfo  *tbinfo = &tblinfo[i];
+               TableInfo  *tinfo = &tblinfo[i];
 
                /*
                 * For partitioned tables, foreign keys have no triggers so they must
                 * be included anyway in case some foreign keys are defined.
                 */
-               if ((!tbinfo->hastriggers &&
-                        tbinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
-                       !(tbinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
+               if ((!tinfo->hastriggers &&
+                        tinfo->relkind != RELKIND_PARTITIONED_TABLE) ||
+                       !(tinfo->dobj.dump & DUMP_COMPONENT_DEFINITION))
                        continue;
 
                /* OK, we need info for this table */
                if (tbloids->len > 1)   /* do we have more than the '{'? */
                        appendPQExpBufferChar(tbloids, ',');
-               appendPQExpBuffer(tbloids, "%u", tbinfo->dobj.catId.oid);
+               appendPQExpBuffer(tbloids, "%u", tinfo->dobj.catId.oid);
        }
        appendPQExpBufferChar(tbloids, '}');
 
@@ -16800,7 +16800,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
         */
        if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)
        {
-               TableInfo  *owning_tab = findTableByOid(tbinfo->owning_tab);
+               owning_tab = findTableByOid(tbinfo->owning_tab);
 
                if (owning_tab == NULL)
                        pg_fatal("failed sanity check, parent table with OID %u of sequence with OID %u not found",
