On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:
> I really think #2s should be done last. I'm not as comfortable with
> the renaming and we might want to discuss tactics on that. We could
> either opt to rename the shadowed or shadowing variable, or both.  If
> we rename the shadowing variable, then pending patches or forward
> patches could use the wrong variable.  If we rename the shadowed
> variable then it's not impossible that backpatching could go wrong
> where the new code intends to reference the outer variable using the
> newly named variable, but when that's backpatched it uses the variable
> with the same name in the inner scope.  Renaming both would make the
> problem more obvious.

The most *likely* outcome of renaming the *outer* variable is that
*every* cherry-pick involving that variable would fail to compile,
which is an *obvious* failure (good) but also kind of annoying if it
could've worked fine if it weren't renamed.  I think most of the renames
should be applied to the inner var, because it's of narrower scope, and
more likely to cause a conflict (good) rather than appearing to apply
cleanly but then misbehaving.  But it seems reasonable to consider
renaming both if the inner scope is longer than a handful of lines.
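For anyone unfamiliar with the warning, here's a minimal hypothetical sketch (not taken from any of the patches below) of the hazard -Wshadow=compatible-local catches: an inner declaration with the same name and a compatible type silently takes over, and updates meant for the outer variable are lost.

```c
/* Hypothetical example: the inner "total" shadows the compatible outer
 * one, so each loop iteration updates a throwaway copy.  Compiling with
 * "gcc -Wshadow=compatible-local" flags the inner declaration. */
int
sum_array(const int *vals, int n)
{
	int			total = 0;

	for (int i = 0; i < n; i++)
	{
		int			total = 0;	/* bug: shadows the outer "total" */

		total += vals[i];		/* updates the inner copy only */
	}
	return total;				/* always 0: the outer var was never touched */
}
```

Without the flag this compiles silently and sum_array() returns 0 no matter what is passed in; with it, gcc reports that the inner declaration shadows a previous local.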

> Would you be able to write a patch for #4. I'll do #5 now. You could
> do a draft patch for #2 as well, but I think it should be committed
> last, if we decide it's a good move to make. It may be worth having
> the discussion about if we actually want to run
> -Wshadow=compatible-local as a standard build flag before we rename
> anything.

I'm afraid the discussion about default flags would distract from fixing
the individual warnings, which themselves preclude use of the flag by
individual developers, or the buildfarm, even as a local setting.

It can't be enabled until *all* the shadows are gone, due to -Werror on
the buildfarm and cirrusci.  Unless perhaps we used -Wno-error=shadow.
I suppose we're only talking about enabling it for gcc?

The biggest benefit is if we fix *all* the local shadow vars, since that
allows someone to make use of the option, and thereby avoid such issues
in the future.  Enabling the option could conceivably avoid issues when
cherry-picking into a back branch - if an inner var is re-introduced
during conflict resolution, then a new warning would be issued, and
hopefully the developer would look more closely.

Would you check if any of these changes are good enough?

-- 
Justin
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 5887166061a..8a06b73948d 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -6256,45 +6256,45 @@ FreezeMultiXactId(MultiXactId multi, uint16 t_infomask,
                return multi;
        }
 
        /*
         * Do a more thorough second pass over the multi to figure out which
         * member XIDs actually need to be kept.  Checking the precise status of
         * individual members might even show that we don't need to keep anything.
         */
        nnewmembers = 0;
        newmembers = palloc(sizeof(MultiXactMember) * nmembers);
        has_lockers = false;
        update_xid = InvalidTransactionId;
        update_committed = false;
        temp_xid_out = *mxid_oldest_xid_out;    /* init for FRM_RETURN_IS_MULTI */
 
        for (i = 0; i < nmembers; i++)
        {
                /*
                 * Determine whether to keep this member or ignore it.
                 */
                if (ISUPDATE_from_mxstatus(members[i].status))
                {
-                       TransactionId xid = members[i].xid;
+                       xid = members[i].xid;
 
                        Assert(TransactionIdIsValid(xid));
                        if (TransactionIdPrecedes(xid, relfrozenxid))
                                ereport(ERROR,
                                                (errcode(ERRCODE_DATA_CORRUPTED),
                                                 errmsg_internal("found update xid %u from before relfrozenxid %u",
                                                                                 xid, relfrozenxid)));
 
                        /*
                         * It's an update; should we keep it?  If the transaction is known
                         * aborted or crashed then it's okay to ignore it, otherwise not.
                         * Note that an updater older than cutoff_xid cannot possibly be
                         * committed, because HeapTupleSatisfiesVacuum would have returned
                         * HEAPTUPLE_DEAD and we would not be trying to freeze the tuple.
                         *
                         * As with all tuple visibility routines, it's critical to test
                         * TransactionIdIsInProgress before TransactionIdDidCommit,
                         * because of race conditions explained in detail in
                         * heapam_visibility.c.
                         */
                        if (TransactionIdIsCurrentTransactionId(xid) ||
                                TransactionIdIsInProgress(xid))
diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c
index 9b03579e6e0..9a83ebf3231 100644
--- a/src/backend/catalog/heap.c
+++ b/src/backend/catalog/heap.c
@@ -1799,57 +1799,57 @@ heap_drop_with_catalog(Oid relid)
        rel = relation_open(relid, AccessExclusiveLock);
 
        /*
         * There can no longer be anyone *else* touching the relation, but we
         * might still have open queries or cursors, or pending trigger events, in
         * our own session.
         */
        CheckTableNotInUse(rel, "DROP TABLE");
 
        /*
         * This effectively deletes all rows in the table, and may be done in a
         * serializable transaction.  In that case we must record a rw-conflict in
         * to this transaction from each transaction holding a predicate lock on
         * the table.
         */
        CheckTableForSerializableConflictIn(rel);
 
        /*
         * Delete pg_foreign_table tuple first.
         */
        if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)
        {
-               Relation        rel;
-               HeapTuple       tuple;
+               Relation        pg_foreign_table;
+               HeapTuple       foreigntuple;
 
-               rel = table_open(ForeignTableRelationId, RowExclusiveLock);
+               pg_foreign_table = table_open(ForeignTableRelationId, RowExclusiveLock);
 
-               tuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
-               if (!HeapTupleIsValid(tuple))
+               foreigntuple = SearchSysCache1(FOREIGNTABLEREL, ObjectIdGetDatum(relid));
+               if (!HeapTupleIsValid(foreigntuple))
                        elog(ERROR, "cache lookup failed for foreign table %u", relid);
 
-               CatalogTupleDelete(rel, &tuple->t_self);
+               CatalogTupleDelete(pg_foreign_table, &foreigntuple->t_self);
 
-               ReleaseSysCache(tuple);
-               table_close(rel, RowExclusiveLock);
+               ReleaseSysCache(foreigntuple);
+               table_close(pg_foreign_table, RowExclusiveLock);
        }
 
        /*
         * If a partitioned table, delete the pg_partitioned_table tuple.
         */
        if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
                RemovePartitionKeyByRelId(relid);
 
        /*
         * If the relation being dropped is the default partition itself,
         * invalidate its entry in pg_partitioned_table.
         */
        if (relid == defaultPartOid)
                update_default_partition_oid(parentOid, InvalidOid);
 
        /*
         * Schedule unlinking of the relation's physical files at commit.
         */
        if (RELKIND_HAS_STORAGE(rel->rd_rel->relkind))
                RelationDropStorage(rel);
 
        /* ensure that stats are dropped if transaction commits */
diff --git a/src/backend/commands/publicationcmds.c b/src/backend/commands/publicationcmds.c
index 8b574b86c47..f9366f588fb 100644
--- a/src/backend/commands/publicationcmds.c
+++ b/src/backend/commands/publicationcmds.c
@@ -87,70 +87,70 @@ parse_publication_options(ParseState *pstate,
 {
        ListCell   *lc;
 
        *publish_given = false;
        *publish_via_partition_root_given = false;
 
        /* defaults */
        pubactions->pubinsert = true;
        pubactions->pubupdate = true;
        pubactions->pubdelete = true;
        pubactions->pubtruncate = true;
        *publish_via_partition_root = false;
 
        /* Parse options */
        foreach(lc, options)
        {
                DefElem    *defel = (DefElem *) lfirst(lc);
 
                if (strcmp(defel->defname, "publish") == 0)
                {
                        char       *publish;
                        List       *publish_list;
-                       ListCell   *lc;
+                       ListCell   *lc2;
 
                        if (*publish_given)
                                errorConflictingDefElem(defel, pstate);
 
                        /*
                         * If publish option was given only the explicitly listed actions
                         * should be published.
                         */
                        pubactions->pubinsert = false;
                        pubactions->pubupdate = false;
                        pubactions->pubdelete = false;
                        pubactions->pubtruncate = false;
 
                        *publish_given = true;
                        publish = defGetString(defel);
 
                        if (!SplitIdentifierString(publish, ',', &publish_list))
                                ereport(ERROR,
                                                (errcode(ERRCODE_SYNTAX_ERROR),
                                                 errmsg("invalid list syntax for \"publish\" option")));
 
                        /* Process the option list. */
-                       foreach(lc, publish_list)
+                       foreach(lc2, publish_list)
                        {
-                               char       *publish_opt = (char *) lfirst(lc);
+                               char       *publish_opt = (char *) lfirst(lc2);
 
                                if (strcmp(publish_opt, "insert") == 0)
                                        pubactions->pubinsert = true;
                                else if (strcmp(publish_opt, "update") == 0)
                                        pubactions->pubupdate = true;
                                else if (strcmp(publish_opt, "delete") == 0)
                                        pubactions->pubdelete = true;
                                else if (strcmp(publish_opt, "truncate") == 0)
                                        pubactions->pubtruncate = true;
                                else
                                        ereport(ERROR,
                                                        (errcode(ERRCODE_SYNTAX_ERROR),
                                                         errmsg("unrecognized \"publish\" value: \"%s\"", publish_opt)));
                        }
                }
                else if (strcmp(defel->defname, "publish_via_partition_root") == 0)
                {
                        if (*publish_via_partition_root_given)
                                errorConflictingDefElem(defel, pstate);
                        *publish_via_partition_root_given = true;
                        *publish_via_partition_root = defGetBoolean(defel);
                }
diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index dacc989d855..7535b86bcae 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -10204,45 +10204,45 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 
        foreach(cell, clone)
        {
                Oid                     parentConstrOid = lfirst_oid(cell);
                Form_pg_constraint constrForm;
                Relation        pkrel;
                HeapTuple       tuple;
                int                     numfks;
                AttrNumber      conkey[INDEX_MAX_KEYS];
                AttrNumber      mapped_conkey[INDEX_MAX_KEYS];
                AttrNumber      confkey[INDEX_MAX_KEYS];
                Oid                     conpfeqop[INDEX_MAX_KEYS];
                Oid                     conppeqop[INDEX_MAX_KEYS];
                Oid                     conffeqop[INDEX_MAX_KEYS];
                int                     numfkdelsetcols;
                AttrNumber      confdelsetcols[INDEX_MAX_KEYS];
                Constraint *fkconstraint;
                bool            attached;
                Oid                     indexOid;
                Oid                     constrOid;
                ObjectAddress address,
                                        referenced;
-               ListCell   *cell;
+               ListCell   *lc;
                Oid                     insertTriggerOid,
                                        updateTriggerOid;
 
                tuple = SearchSysCache1(CONSTROID, parentConstrOid);
                if (!HeapTupleIsValid(tuple))
                        elog(ERROR, "cache lookup failed for constraint %u",
                                 parentConstrOid);
                constrForm = (Form_pg_constraint) GETSTRUCT(tuple);
 
                /* Don't clone constraints whose parents are being cloned */
                if (list_member_oid(clone, constrForm->conparentid))
                {
                        ReleaseSysCache(tuple);
                        continue;
                }
 
                /*
                 * Need to prevent concurrent deletions.  If pkrel is a partitioned
                 * relation, that means to lock all partitions.
                 */
                pkrel = table_open(constrForm->confrelid, ShareRowExclusiveLock);
                if (pkrel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
@@ -10257,47 +10257,47 @@ CloneFkReferencing(List **wqueue, Relation parentRel, Relation partRel)
 
                /*
                 * Get the "check" triggers belonging to the constraint to pass as
                 * parent OIDs for similar triggers that will be created on the
                 * partition in addFkRecurseReferencing().  They are also passed to
                 * tryAttachPartitionForeignKey() below to simply assign as parents to
                 * the partition's existing "check" triggers, that is, if the
                 * corresponding constraints is deemed attachable to the parent
                 * constraint.
                 */
                GetForeignKeyCheckTriggers(trigrel, constrForm->oid,
                                                                   constrForm->confrelid, constrForm->conrelid,
                                                                   &insertTriggerOid, &updateTriggerOid);
 
                /*
                 * Before creating a new constraint, see whether any existing FKs are
                 * fit for the purpose.  If one is, attach the parent constraint to
                 * it, and don't clone anything.  This way we avoid the expensive
                 * verification step and don't end up with a duplicate FK, and we
                 * don't need to recurse to partitions for this constraint.
                 */
                attached = false;
-               foreach(cell, partFKs)
+               foreach(lc, partFKs)
                {
-                       ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, cell);
+                       ForeignKeyCacheInfo *fk = lfirst_node(ForeignKeyCacheInfo, lc);
 
                        if (tryAttachPartitionForeignKey(fk,
                                                                                         RelationGetRelid(partRel),
                                                                                         parentConstrOid,
                                                                                         numfks,
                                                                                         mapped_conkey,
                                                                                         confkey,
                                                                                         conpfeqop,
                                                                                         insertTriggerOid,
                                                                                         updateTriggerOid,
                                                                                         trigrel))
                        {
                                attached = true;
                                table_close(pkrel, NoLock);
                                break;
                        }
                }
                if (attached)
                {
                        ReleaseSysCache(tuple);
                        continue;
                }
diff --git a/src/backend/commands/trigger.c b/src/backend/commands/trigger.c
index 7661e004a93..b0a9e7d7664 100644
--- a/src/backend/commands/trigger.c
+++ b/src/backend/commands/trigger.c
@@ -1707,47 +1707,47 @@ renametrig_partition(Relation tgrel, Oid partitionId, Oid parentTriggerOid,
                                                                NULL, 1, &key);
        while (HeapTupleIsValid(tuple = systable_getnext(tgscan)))
        {
                Form_pg_trigger tgform = (Form_pg_trigger) GETSTRUCT(tuple);
                Relation        partitionRel;
 
                if (tgform->tgparentid != parentTriggerOid)
                        continue;                       /* not our trigger */
 
                partitionRel = table_open(partitionId, NoLock);
 
                /* Rename the trigger on this partition */
                renametrig_internal(tgrel, partitionRel, tuple, newname, expected_name);
 
                /* And if this relation is partitioned, recurse to its partitions */
                if (partitionRel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)
                {
                        PartitionDesc partdesc = RelationGetPartitionDesc(partitionRel,
                                                                                                                          true);
 
                        for (int i = 0; i < partdesc->nparts; i++)
                        {
-                               Oid                     partitionId = partdesc->oids[i];
+                               Oid                     partid = partdesc->oids[i];
 
-                               renametrig_partition(tgrel, partitionId, tgform->oid, newname,
+                               renametrig_partition(tgrel, partid, tgform->oid, newname,
                                                                         NameStr(tgform->tgname));
                        }
                }
                table_close(partitionRel, NoLock);
 
                /* There should be at most one matching tuple */
                break;
        }
        systable_endscan(tgscan);
 }
 
 /*
  * EnableDisableTrigger()
  *
  *     Called by ALTER TABLE ENABLE/DISABLE [ REPLICA | ALWAYS ] TRIGGER
  *     to change 'tgenabled' field for the specified trigger(s)
  *
  * rel: relation to process (caller must hold suitable lock on it)
  * tgname: trigger to process, or NULL to scan all triggers
  * fires_when: new value for tgenabled field. In addition to generic
 *                        enablement/disablement, this also defines when the trigger
 *                        should be fired in session replication roles.
diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index 933c3049016..736082c8fb3 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -3168,45 +3168,44 @@ hashagg_reset_spill_state(AggState *aggstate)
 AggState *
 ExecInitAgg(Agg *node, EState *estate, int eflags)
 {
        AggState   *aggstate;
        AggStatePerAgg peraggs;
        AggStatePerTrans pertransstates;
        AggStatePerGroup *pergroups;
        Plan       *outerPlan;
        ExprContext *econtext;
        TupleDesc       scanDesc;
        int                     max_aggno;
        int                     max_transno;
        int                     numaggrefs;
        int                     numaggs;
        int                     numtrans;
        int                     phase;
        int                     phaseidx;
        ListCell   *l;
        Bitmapset  *all_grouped_cols = NULL;
        int                     numGroupingSets = 1;
        int                     numPhases;
        int                     numHashes;
-       int                     i = 0;
        int                     j = 0;
        bool            use_hashing = (node->aggstrategy == AGG_HASHED ||
                                                           node->aggstrategy == AGG_MIXED);
 
        /* check for unsupported flags */
        Assert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));
 
        /*
         * create state structure
         */
        aggstate = makeNode(AggState);
        aggstate->ss.ps.plan = (Plan *) node;
        aggstate->ss.ps.state = estate;
        aggstate->ss.ps.ExecProcNode = ExecAgg;
 
        aggstate->aggs = NIL;
        aggstate->numaggs = 0;
        aggstate->numtrans = 0;
        aggstate->aggstrategy = node->aggstrategy;
        aggstate->aggsplit = node->aggsplit;
        aggstate->maxsets = 0;
        aggstate->projected_set = -1;
@@ -3259,45 +3258,45 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
        aggstate->numphases = numPhases;
 
        aggstate->aggcontexts = (ExprContext **)
                palloc0(sizeof(ExprContext *) * numGroupingSets);
 
        /*
         * Create expression contexts.  We need three or more, one for
         * per-input-tuple processing, one for per-output-tuple processing, one
         * for all the hashtables, and one for each grouping set.  The per-tuple
         * memory context of the per-grouping-set ExprContexts (aggcontexts)
         * replaces the standalone memory context formerly used to hold transition
         * values.  We cheat a little by using ExecAssignExprContext() to build
         * all of them.
         *
         * NOTE: the details of what is stored in aggcontexts and what is stored
         * in the regular per-query memory context are driven by a simple
         * decision: we want to reset the aggcontext at group boundaries (if not
         * hashing) and in ExecReScanAgg to recover no-longer-wanted space.
         */
        ExecAssignExprContext(estate, &aggstate->ss.ps);
        aggstate->tmpcontext = aggstate->ss.ps.ps_ExprContext;
 
-       for (i = 0; i < numGroupingSets; ++i)
+       for (int i = 0; i < numGroupingSets; ++i)
        {
                ExecAssignExprContext(estate, &aggstate->ss.ps);
                aggstate->aggcontexts[i] = aggstate->ss.ps.ps_ExprContext;
        }
 
        if (use_hashing)
                aggstate->hashcontext = CreateWorkExprContext(estate);
 
        ExecAssignExprContext(estate, &aggstate->ss.ps);
 
        /*
         * Initialize child nodes.
         *
         * If we are doing a hashed aggregation then the child plan does not need
         * to handle REWIND efficiently; see ExecReScanAgg.
         */
        if (node->aggstrategy == AGG_HASHED)
                eflags &= ~EXEC_FLAG_REWIND;
        outerPlan = outerPlan(node);
        outerPlanState(aggstate) = ExecInitNode(outerPlan, estate, eflags);
 
        /*
@@ -3399,75 +3398,76 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
                Agg                *aggnode;
                Sort       *sortnode;
 
                if (phaseidx > 0)
                {
                        aggnode = list_nth_node(Agg, node->chain, phaseidx - 1);
                        sortnode = castNode(Sort, outerPlan(aggnode));
                }
                else
                {
                        aggnode = node;
                        sortnode = NULL;
                }
 
                Assert(phase <= 1 || sortnode);
 
                if (aggnode->aggstrategy == AGG_HASHED
                        || aggnode->aggstrategy == AGG_MIXED)
                {
                        AggStatePerPhase phasedata = &aggstate->phases[0];
                        AggStatePerHash perhash;
                        Bitmapset  *cols = NULL;
+                       int                     setno = phasedata->numsets++;
 
                        Assert(phase == 0);
-                       i = phasedata->numsets++;
-                       perhash = &aggstate->perhash[i];
+                       perhash = &aggstate->perhash[setno];
 
                        /* phase 0 always points to the "real" Agg in the hash case */
                        phasedata->aggnode = node;
                        phasedata->aggstrategy = node->aggstrategy;
 
                        /* but the actual Agg node representing this hash is saved here */
                        perhash->aggnode = aggnode;
 
-                       phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;
+                       phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;
 
                        for (j = 0; j < aggnode->numCols; ++j)
                                cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
-                       phasedata->grouped_cols[i] = cols;
+                       phasedata->grouped_cols[setno] = cols;
 
                        all_grouped_cols = bms_add_members(all_grouped_cols, cols);
                        continue;
                }
                else
                {
                        AggStatePerPhase phasedata = &aggstate->phases[++phase];
                        int                     num_sets;
 
                        phasedata->numsets = num_sets = list_length(aggnode->groupingSets);
 
                        if (num_sets)
                        {
+                               int i;
                                phasedata->gset_lengths = palloc(num_sets * sizeof(int));
                                phasedata->grouped_cols = palloc(num_sets * sizeof(Bitmapset *));
 
                                i = 0;
                                foreach(l, aggnode->groupingSets)
                                {
                                        int                     current_length = list_length(lfirst(l));
                                        Bitmapset  *cols = NULL;
 
                                        /* planner forces this to be correct */
                                        for (j = 0; j < current_length; ++j)
                                                cols = bms_add_member(cols, aggnode->grpColIdx[j]);
 
                                        phasedata->grouped_cols[i] = cols;
                                        phasedata->gset_lengths[i] = current_length;
 
                                        ++i;
                                }
 
                                all_grouped_cols = bms_add_members(all_grouped_cols,
                                                                                                   phasedata->grouped_cols[0]);
                        }
@@ -3515,71 +3515,73 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
                                /* and for all grouped columns, unless already computed */
                                if (phasedata->eqfunctions[aggnode->numCols - 1] == NULL)
                                {
                                        phasedata->eqfunctions[aggnode->numCols - 1] =
                                                execTuplesMatchPrepare(scanDesc,
                                                                                           aggnode->numCols,
                                                                                           aggnode->grpColIdx,
                                                                                           aggnode->grpOperators,
                                                                                           aggnode->grpCollations,
                                                                                           (PlanState *) aggstate);
                                }
                        }
 
                        phasedata->aggnode = aggnode;
                        phasedata->aggstrategy = aggnode->aggstrategy;
                        phasedata->sortnode = sortnode;
                }
        }
 
        /*
         * Convert all_grouped_cols to a descending-order list.
         */
-       i = -1;
-       while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
-               aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+       {
+               int i = -1;
+               while ((i = bms_next_member(all_grouped_cols, i)) >= 0)
+                       aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);
+       }
+       }
 
        /*
         * Set up aggregate-result storage in the output expr context, and also
         * allocate my private per-agg working storage
         */
        econtext = aggstate->ss.ps.ps_ExprContext;
        econtext->ecxt_aggvalues = (Datum *) palloc0(sizeof(Datum) * numaggs);
        econtext->ecxt_aggnulls = (bool *) palloc0(sizeof(bool) * numaggs);
 
        peraggs = (AggStatePerAgg) palloc0(sizeof(AggStatePerAggData) * numaggs);
        pertransstates = (AggStatePerTrans) palloc0(sizeof(AggStatePerTransData) * numtrans);
 
        aggstate->peragg = peraggs;
        aggstate->pertrans = pertransstates;
 
 
        aggstate->all_pergroups =
                (AggStatePerGroup *) palloc0(sizeof(AggStatePerGroup)
                                                                         * (numGroupingSets + numHashes));
        pergroups = aggstate->all_pergroups;
 
        if (node->aggstrategy != AGG_HASHED)
        {
-               for (i = 0; i < numGroupingSets; i++)
+               for (int i = 0; i < numGroupingSets; i++)
                {
                        pergroups[i] = (AggStatePerGroup) palloc0(sizeof(AggStatePerGroupData)
                                                                                                          * numaggs);
                }
 
                aggstate->pergroups = pergroups;
                pergroups += numGroupingSets;
        }
 
        /*
         * Hashing can only appear in the initial phase.
         */
        if (use_hashing)
        {
                Plan       *outerplan = outerPlan(node);
                uint64          totalGroups = 0;
                int                     i;
 
                aggstate->hash_metacxt = AllocSetContextCreate(aggstate->ss.ps.state->es_query_cxt,
                                                                                                           "HashAgg meta context",
                                                                                                           ALLOCSET_DEFAULT_SIZES);
                aggstate->hash_spill_rslot = ExecInitExtraTupleSlot(estate, scanDesc,
diff --git a/src/backend/executor/spi.c b/src/backend/executor/spi.c
index 29bc26669b0..a250a33f8cb 100644
--- a/src/backend/executor/spi.c
+++ b/src/backend/executor/spi.c
@@ -2465,45 +2465,44 @@ _SPI_execute_plan(SPIPlanPtr plan, const SPIExecuteOptions *options,
         * there be only one query.
         */
        if (options->must_return_tuples && plan->plancache_list == NIL)
                ereport(ERROR,
                                (errcode(ERRCODE_SYNTAX_ERROR),
                                 errmsg("empty query does not return tuples")));
 
        foreach(lc1, plan->plancache_list)
        {
                CachedPlanSource *plansource = (CachedPlanSource *) lfirst(lc1);
                List       *stmt_list;
                ListCell   *lc2;
 
                spicallbackarg.query = plansource->query_string;
 
                /*
                 * If this is a one-shot plan, we still need to do parse analysis.
                 */
                if (plan->oneshot)
                {
                        RawStmt    *parsetree = plansource->raw_parse_tree;
                        const char *src = plansource->query_string;
-                       List       *stmt_list;
 
                        /*
                         * Parameter datatypes are driven by parserSetup hook 
if provided,
                         * otherwise we use the fixed parameter list.
                         */
                        if (parsetree == NULL)
                                stmt_list = NIL;
                        else if (plan->parserSetup != NULL)
                        {
                                Assert(plan->nargs == 0);
                                stmt_list = pg_analyze_and_rewrite_withcb(parsetree,
                                                                          src,
                                                                          plan->parserSetup,
                                                                          plan->parserSetupArg,
                                                                          _SPI_current->queryEnv);
                        }
                        else
                        {
                                stmt_list = pg_analyze_and_rewrite_fixedparams(parsetree,
                                                                               src,
                                                                               plan->argtypes,
                                                                               plan->nargs,
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index 75acea149c7..74adc4f3946 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -2526,48 +2526,48 @@ cost_append(AppendPath *apath, PlannerInfo *root)
        apath->path.rows = 0;
 
        if (apath->subpaths == NIL)
                return;
 
        if (!apath->path.parallel_aware)
        {
                List       *pathkeys = apath->path.pathkeys;
 
                if (pathkeys == NIL)
                {
                        Path       *subpath = (Path *) linitial(apath->subpaths);
 
                        /*
                         * For an unordered, non-parallel-aware Append we take the startup
                         * cost as the startup cost of the first subpath.
                         */
                        apath->path.startup_cost = subpath->startup_cost;
 
                        /* Compute rows and costs as sums of subplan rows and costs. */
                        foreach(l, apath->subpaths)
                        {
-                               Path       *subpath = (Path *) lfirst(l);
+                               Path       *sub = (Path *) lfirst(l);
 
-                               apath->path.rows += subpath->rows;
-                               apath->path.total_cost += subpath->total_cost;
+                               apath->path.rows += sub->rows;
+                               apath->path.total_cost += sub->total_cost;
                        }
                }
                else
                {
                        /*
                         * For an ordered, non-parallel-aware Append we take the startup
                         * cost as the sum of the subpath startup costs.  This ensures
                         * that we don't underestimate the startup cost when a query's
                         * LIMIT is such that several of the children have to be run to
                         * satisfy it.  This might be overkill --- another plausible hack
                         * would be to take the Append's startup cost as the maximum of
                         * the child startup costs.  But we don't want to risk believing
                         * that an ORDER BY LIMIT query can be satisfied at small cost
                         * when the first child has small startup cost but later ones
                         * don't.  (If we had the ability to deal with nonlinear cost
                         * interpolation for partial retrievals, we would not need to be
                         * so conservative about this.)
                         *
                         * This case is also different from the above in that we have to
                         * account for possibly injecting sorts into subpaths that aren't
                         * natively ordered.
                         */
diff --git a/src/backend/optimizer/path/tidpath.c b/src/backend/optimizer/path/tidpath.c
index 279ca1f5b44..23194d6e007 100644
--- a/src/backend/optimizer/path/tidpath.c
+++ b/src/backend/optimizer/path/tidpath.c
@@ -286,48 +286,48 @@ TidQualFromRestrictInfoList(PlannerInfo *root, List *rlist, RelOptInfo *rel)
                {
                        ListCell   *j;
 
                        /*
                         * We must be able to extract a CTID condition from every
                         * sub-clause of an OR, or we can't use it.
                         */
                        foreach(j, ((BoolExpr *) rinfo->orclause)->args)
                        {
                                Node       *orarg = (Node *) lfirst(j);
                                List       *sublist;
 
                                /* OR arguments should be ANDs or sub-RestrictInfos */
                                if (is_andclause(orarg))
                                {
                                        List       *andargs = ((BoolExpr *) orarg)->args;
 
                                        /* Recurse in case there are sub-ORs */
                                        sublist = TidQualFromRestrictInfoList(root, andargs, rel);
                                }
                                else
                                {
-                                       RestrictInfo *rinfo = castNode(RestrictInfo, orarg);
+                                       RestrictInfo *list = castNode(RestrictInfo, orarg);
 
-                                       Assert(!restriction_is_or_clause(rinfo));
-                                       sublist = TidQualFromRestrictInfo(root, rinfo, rel);
+                                       Assert(!restriction_is_or_clause(list));
+                                       sublist = TidQualFromRestrictInfo(root, list, rel);
                                }
 
                                /*
                                 * If nothing found in this arm, we can't do anything with
                                 * this OR clause.
                                 */
                                if (sublist == NIL)
                                {
                                        rlst = NIL; /* forget anything we had */
                                        break;          /* out of loop over OR args */
                                }
 
                                /*
                                 * OK, continue constructing implicitly-OR'ed result list.
                                 */
                                rlst = list_concat(rlst, sublist);
                        }
                }
                else
                {
                        /* Not an OR clause, so handle base cases */
                        rlst = TidQualFromRestrictInfo(root, rinfo, rel);
diff --git a/src/backend/optimizer/prep/prepunion.c b/src/backend/optimizer/prep/prepunion.c
index 71052c841d7..f97c2f5256c 100644
--- a/src/backend/optimizer/prep/prepunion.c
+++ b/src/backend/optimizer/prep/prepunion.c
@@ -639,47 +639,47 @@ generate_union_paths(SetOperationStmt *op, PlannerInfo *root,
 
        add_path(result_rel, path);
 
        /*
         * Estimate number of groups.  For now we just assume the output is unique
         * --- this is certainly true for the UNION case, and we want worst-case
         * estimates anyway.
         */
        result_rel->rows = path->rows;
 
        /*
         * Now consider doing the same thing using the partial paths plus Append
         * plus Gather.
         */
        if (partial_paths_valid)
        {
                Path       *ppath;
                int                     parallel_workers = 0;
 
                /* Find the highest number of workers requested for any subpath. */
                foreach(lc, partial_pathlist)
                {
-                       Path       *path = lfirst(lc);
+                       Path       *partial_path = lfirst(lc);
 
-                       parallel_workers = Max(parallel_workers, path->parallel_workers);
+                       parallel_workers = Max(parallel_workers, partial_path->parallel_workers);
                }
                Assert(parallel_workers > 0);
 
                /*
                 * If the use of parallel append is permitted, always request at least
                 * log2(# of children) paths.  We assume it can be useful to have
                 * extra workers in this case because they will be spread out across
                 * the children.  The precise formula is just a guess; see
                 * add_paths_to_append_rel.
                 */
                if (enable_parallel_append)
                {
                        parallel_workers = Max(parallel_workers,
                                               pg_leftmost_one_pos32(list_length(partial_pathlist)) + 1);
                        parallel_workers = Min(parallel_workers,
                                               max_parallel_workers_per_gather);
                }
                Assert(parallel_workers > 0);
 
                ppath = (Path *)
                        create_append_path(root, result_rel, NIL, partial_pathlist,
                                                           NIL, NULL,
diff --git a/src/backend/optimizer/util/paramassign.c b/src/backend/optimizer/util/paramassign.c
index 8e2d4bf5158..933460989b3 100644
--- a/src/backend/optimizer/util/paramassign.c
+++ b/src/backend/optimizer/util/paramassign.c
@@ -418,93 +418,93 @@ replace_nestloop_param_placeholdervar(PlannerInfo *root, PlaceHolderVar *phv)
  * while planning the subquery.  So we need not modify the subplan or the
  * PlannerParamItems here.  What we do need to do is add entries to
  * root->curOuterParams to signal the parent nestloop plan node that it must
  * provide these values.  This differs from replace_nestloop_param_var in
  * that the PARAM_EXEC slots to use have already been determined.
  *
  * Note that we also use root->curOuterRels as an implicit parameter for
  * sanity checks.
  */
 void
 process_subquery_nestloop_params(PlannerInfo *root, List *subplan_params)
 {
        ListCell   *lc;
 
        foreach(lc, subplan_params)
        {
                PlannerParamItem *pitem = lfirst_node(PlannerParamItem, lc);
 
                if (IsA(pitem->item, Var))
                {
                        Var                *var = (Var *) pitem->item;
                        NestLoopParam *nlp;
-                       ListCell   *lc;
+                       ListCell   *lc2;
 
                        /* If not from a nestloop outer rel, complain */
                        if (!bms_is_member(var->varno, root->curOuterRels))
                                elog(ERROR, "non-LATERAL parameter required by subquery");
 
                        /* Is this param already listed in root->curOuterParams? */
-                       foreach(lc, root->curOuterParams)
+                       foreach(lc2, root->curOuterParams)
                        {
-                               nlp = (NestLoopParam *) lfirst(lc);
+                               nlp = (NestLoopParam *) lfirst(lc2);
                                if (nlp->paramno == pitem->paramId)
                                {
                                        Assert(equal(var, nlp->paramval));
                                        /* Present, so nothing to do */
                                        break;
                                }
                        }
-                       if (lc == NULL)
+                       if (lc2 == NULL)
                        {
                                /* No, so add it */
                                nlp = makeNode(NestLoopParam);
                                nlp->paramno = pitem->paramId;
                                nlp->paramval = copyObject(var);
                                root->curOuterParams = lappend(root->curOuterParams, nlp);
                        }
                }
                else if (IsA(pitem->item, PlaceHolderVar))
                {
                        PlaceHolderVar *phv = (PlaceHolderVar *) pitem->item;
                        NestLoopParam *nlp;
-                       ListCell   *lc;
+                       ListCell   *lc2;
 
                        /* If not from a nestloop outer rel, complain */
                        if (!bms_is_subset(find_placeholder_info(root, phv)->ph_eval_at,
                                           root->curOuterRels))
                                elog(ERROR, "non-LATERAL parameter required by subquery");
 
                        /* Is this param already listed in root->curOuterParams? */
-                       foreach(lc, root->curOuterParams)
+                       foreach(lc2, root->curOuterParams)
                        {
-                               nlp = (NestLoopParam *) lfirst(lc);
+                               nlp = (NestLoopParam *) lfirst(lc2);
                                if (nlp->paramno == pitem->paramId)
                                {
                                        Assert(equal(phv, nlp->paramval));
                                        /* Present, so nothing to do */
                                        break;
                                }
                        }
-                       if (lc == NULL)
+                       if (lc2 == NULL)
                        {
                                /* No, so add it */
                                nlp = makeNode(NestLoopParam);
                                nlp->paramno = pitem->paramId;
                                nlp->paramval = (Var *) copyObject(phv);
                                root->curOuterParams = lappend(root->curOuterParams, nlp);
                        }
                }
                else
                        elog(ERROR, "unexpected type of subquery parameter");
        }
 }
 
 /*
  * Identify any NestLoopParams that should be supplied by a NestLoop plan
  * node with the specified lefthand rels.  Remove them from the active
  * root->curOuterParams list and return them as the result list.
  */
 List *
 identify_current_nestloop_params(PlannerInfo *root, Relids leftrelids)
 {
        List       *result;
diff --git a/src/backend/parser/parse_clause.c b/src/backend/parser/parse_clause.c
index b85fbebd00e..53a17ac3f6a 100644
--- a/src/backend/parser/parse_clause.c
+++ b/src/backend/parser/parse_clause.c
@@ -520,49 +520,49 @@ transformRangeFunction(ParseState *pstate, RangeFunction *r)
                 * likely expecting an un-tweaked function call.
                 *
                 * Note: the transformation changes a non-schema-qualified unnest()
                 * function name into schema-qualified pg_catalog.unnest().  This
                 * choice is also a bit debatable, but it seems reasonable to force
                 * use of built-in unnest() when we make this transformation.
                 */
                if (IsA(fexpr, FuncCall))
                {
                        FuncCall   *fc = (FuncCall *) fexpr;
 
                        if (list_length(fc->funcname) == 1 &&
                                strcmp(strVal(linitial(fc->funcname)), "unnest") == 0 &&
                                list_length(fc->args) > 1 &&
                                fc->agg_order == NIL &&
                                fc->agg_filter == NULL &&
                                fc->over == NULL &&
                                !fc->agg_star &&
                                !fc->agg_distinct &&
                                !fc->func_variadic &&
                                coldeflist == NIL)
                        {
-                               ListCell   *lc;
+                               ListCell   *lc2;
 
-                               foreach(lc, fc->args)
+                               foreach(lc2, fc->args)
                                {
-                                       Node       *arg = (Node *) lfirst(lc);
+                                       Node       *arg = (Node *) lfirst(lc2);
                                        FuncCall   *newfc;
 
                                        last_srf = pstate->p_last_srf;
 
                                        newfc = makeFuncCall(SystemFuncName("unnest"),
                                                             list_make1(arg),
                                                             COERCE_EXPLICIT_CALL,
                                                             fc->location);
 
                                        newfexpr = transformExpr(pstate, (Node *) newfc,
                                                                 EXPR_KIND_FROM_FUNCTION);
 
                                        /* nodeFunctionscan.c requires SRFs to be at top level */
                                        if (pstate->p_last_srf != last_srf &&
                                                pstate->p_last_srf != newfexpr)
                                                ereport(ERROR,
                                                        (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                                                         errmsg("set-returning functions must appear at top level of FROM"),
                                                         parser_errposition(pstate,
                                                                            exprLocation(pstate->p_last_srf))));
 
                                        funcexprs = lappend(funcexprs, newfexpr);
diff --git a/src/backend/partitioning/partbounds.c b/src/backend/partitioning/partbounds.c
index 091d6e886b6..2720a2508cb 100644
--- a/src/backend/partitioning/partbounds.c
+++ b/src/backend/partitioning/partbounds.c
@@ -4300,46 +4300,45 @@ get_qual_for_range(Relation parent, PartitionBoundSpec *spec,
        int                     i,
                                j;
        PartitionRangeDatum *ldatum,
                           *udatum;
        PartitionKey key = RelationGetPartitionKey(parent);
        Expr       *keyCol;
        Const      *lower_val,
                           *upper_val;
        List       *lower_or_arms,
                           *upper_or_arms;
        int                     num_or_arms,
                                current_or_arm;
        ListCell   *lower_or_start_datum,
                           *upper_or_start_datum;
        bool            need_next_lower_arm,
                                need_next_upper_arm;
 
        if (spec->is_default)
        {
                List       *or_expr_args = NIL;
                PartitionDesc pdesc = RelationGetPartitionDesc(parent, false);
                Oid                *inhoids = pdesc->oids;
-               int                     nparts = pdesc->nparts,
-                                       i;
+               int                     nparts = pdesc->nparts;
 
                for (i = 0; i < nparts; i++)
                {
                        Oid                     inhrelid = inhoids[i];
                        HeapTuple       tuple;
                        Datum           datum;
                        bool            isnull;
                        PartitionBoundSpec *bspec;
 
                        tuple = SearchSysCache1(RELOID, inhrelid);
                        if (!HeapTupleIsValid(tuple))
                                elog(ERROR, "cache lookup failed for relation %u", inhrelid);
 
                        datum = SysCacheGetAttr(RELOID, tuple,
                                                Anum_pg_class_relpartbound,
                                                &isnull);
                        if (isnull)
                                elog(ERROR, "null relpartbound for relation %u", inhrelid);
 
                        bspec = (PartitionBoundSpec *)
                                stringToNode(TextDatumGetCString(datum));
                        if (!IsA(bspec, PartitionBoundSpec))
diff --git a/src/backend/partitioning/partprune.c b/src/backend/partitioning/partprune.c
index bf9fe5b7aaf..91b300f4dba 100644
--- a/src/backend/partitioning/partprune.c
+++ b/src/backend/partitioning/partprune.c
@@ -2270,49 +2270,48 @@ match_clause_to_partition_key(GeneratePruningStepsContext *context,
                         */
                        if (arrexpr->multidims)
                                return PARTCLAUSE_UNSUPPORTED;
 
                        /*
                         * Otherwise, we can just use the list of element values.
                         */
                        elem_exprs = arrexpr->elements;
                }
                else
                {
                        /* Give up on any other clause types. */
                        return PARTCLAUSE_UNSUPPORTED;
                }
 
                /*
                 * Now generate a list of clauses, one for each array element, of the
                 * form leftop saop_op elem_expr
                 */
                elem_clauses = NIL;
                foreach(lc1, elem_exprs)
                {
-                       Expr       *rightop = (Expr *) lfirst(lc1),
-                                          *elem_clause;
+                       Expr       *elem_clause;
 
                        elem_clause = make_opclause(saop_op, BOOLOID, false,
-                                                   leftop, rightop,
+                                                   leftop, lfirst(lc1),
                                                    InvalidOid, saop_coll);
                        elem_clauses = lappend(elem_clauses, elem_clause);
                }
 
                /*
                 * If we have an ANY clause and multiple elements, now turn the list
                 * of clauses into an OR expression.
                 */
                if (saop->useOr && list_length(elem_clauses) > 1)
                        elem_clauses = list_make1(makeBoolExpr(OR_EXPR, elem_clauses, -1));
 
                /* Finally, generate steps */
                *clause_steps = gen_partprune_steps_internal(context, elem_clauses);
                if (context->contradictory)
                        return PARTCLAUSE_MATCH_CONTRADICT;
                else if (*clause_steps == NIL)
                        return PARTCLAUSE_UNSUPPORTED;  /* step generation failed */
                return PARTCLAUSE_MATCH_STEPS;
        }
        else if (IsA(clause, NullTest))
        {
                NullTest   *nulltest = (NullTest *) clause;
diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c
index 89cf9f9389c..8ac78a6cf38 100644
--- a/src/backend/replication/logical/reorderbuffer.c
+++ b/src/backend/replication/logical/reorderbuffer.c
@@ -2301,45 +2301,44 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,
                                                 * previous tuple's toast chunks.
                                                 */
                                                Assert(change->data.tp.clear_toast_afterwards);
                                                ReorderBufferToastReset(rb, txn);
 
                                                /* We don't need this record anymore. */
                                                ReorderBufferReturnChange(rb, specinsert, true);
                                                specinsert = NULL;
                                        }
                                        break;
 
                                case REORDER_BUFFER_CHANGE_TRUNCATE:
                                        {
                                                int                     i;
                                                int                     nrelids = change->data.truncate.nrelids;
                                                int                     nrelations = 0;
                                                Relation   *relations;
 
                                                relations = palloc0(nrelids * sizeof(Relation));
                                                for (i = 0; i < nrelids; i++)
                                                {
                                                        Oid                     relid = change->data.truncate.relids[i];
-                                                       Relation        relation;
 
                                                        relation = RelationIdGetRelation(relid);
 
                                                        if (!RelationIsValid(relation))
                                                                elog(ERROR, "could not open relation with OID %u", relid);
 
                                                        if (!RelationIsLogicallyLogged(relation))
                                                                continue;
 
                                                        relations[nrelations++] = relation;
                                                }
 
                                                /* Apply the truncate. */
                                                ReorderBufferApplyTruncate(rb, txn, nrelations,
                                                                           relations, change,
                                                                           streaming);
 
                                                for (i = 0; i < nrelations; i++)
                                                        RelationClose(relations[i]);
 
                                                break;
                                        }
diff --git a/src/backend/statistics/dependencies.c b/src/backend/statistics/dependencies.c
index bf698c1fc3f..744bc512b65 100644
--- a/src/backend/statistics/dependencies.c
+++ b/src/backend/statistics/dependencies.c
@@ -1673,45 +1673,44 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
                 *
                 * XXX We have to do this even when there are no expressions in
                 * clauses, otherwise find_strongest_dependency may fail for stats
                 * with expressions (due to lookup of negative value in bitmap). So we
                 * need to at least filter out those dependencies. Maybe we could do
                 * it in a cheaper way (if there are no expr clauses, we can just
                 * discard all negative attnums without any lookups).
                 */
                if (unique_exprs_cnt > 0 || stat->exprs != NIL)
                {
                        int                     ndeps = 0;
 
                        for (i = 0; i < deps->ndeps; i++)
                        {
                                bool            skip = false;
                                MVDependency *dep = deps->deps[i];
                                int                     j;
 
                                for (j = 0; j < dep->nattributes; j++)
                                {
                                        int                     idx;
                                        Node       *expr;
-                                       int                     k;
                                        AttrNumber      unique_attnum = InvalidAttrNumber;
                                        AttrNumber      attnum;
 
                                        /* undo the per-statistics offset */
                                        attnum = dep->attributes[j];
 
                                        /*
                                         * For regular attributes we can simply check if it
                                         * matches any clause. If there's no matching clause, we
                                         * can just ignore it. We need to offset the attnum
                                         * though.
                                         */
                                        if (AttrNumberIsForUserDefinedAttr(attnum))
                                        {
                                                dep->attributes[j] = attnum + attnum_offset;
 
                                                if (!bms_is_member(dep->attributes[j], clauses_attnums))
                                                {
                                                        skip = true;
                                                        break;
                                                }
 
@@ -1721,53 +1720,53 @@ dependencies_clauselist_selectivity(PlannerInfo *root,
                                        /*
                                         * the attnum should be a valid system attnum (-1, -2,
                                         * ...)
                                         */
                                        Assert(AttributeNumberIsValid(attnum));
 
                                        /*
                                         * For expressions, we need to do two translations. First
                                         * we have to translate the negative attnum to index in
                                         * the list of expressions (in the statistics object).
                                         * Then we need to see if there's a matching clause. The
                                         * index of the unique expression determines the attnum
                                         * (and we offset it).
                                         */
                                        idx = -(1 + attnum);
 
                                        /* Is the expression index is valid? */
                                        Assert((idx >= 0) && (idx < list_length(stat->exprs)));
 
                                        expr = (Node *) list_nth(stat->exprs, idx);
 
                                        /* try to find the expression in the unique list */
-                                       for (k = 0; k < unique_exprs_cnt; k++)
+                                       for (int m = 0; m < unique_exprs_cnt; 
m++)
                                        {
                                                /*
                                                 * found a matching unique expression, use the attnum
                                                 * (derived from index of the unique expression)
                                                 */
-                                               if (equal(unique_exprs[k], expr))
+                                               if (equal(unique_exprs[m], expr))
                                                {
-                                                       unique_attnum = -(k + 1) + attnum_offset;
+                                                       unique_attnum = -(m + 1) + attnum_offset;
                                                        break;
                                                }
                                        }
 
                                        /*
                                         * Found no matching expression, so we can simply skip
                                         * this dependency, because there's no chance it will be
                                         * fully covered.
                                         */
                                        if (unique_attnum == InvalidAttrNumber)
                                        {
                                                skip = true;
                                                break;
                                        }
 
                                        /* otherwise remap it to the new attnum */
                                        dep->attributes[j] = unique_attnum;
                                }
 
                                /* if found a matching dependency, keep it */
                                if (!skip)
                                {
diff --git a/src/backend/utils/adt/numutils.c b/src/backend/utils/adt/numutils.c
index cc3f95d3990..834ec0b5882 100644
--- a/src/backend/utils/adt/numutils.c
+++ b/src/backend/utils/adt/numutils.c
@@ -429,48 +429,48 @@ pg_ltoa(int32 value, char *a)
  * same.  Caller must ensure that a points to at least MAXINT8LEN bytes.
  */
 int
 pg_ulltoa_n(uint64 value, char *a)
 {
        int                     olength,
                                i = 0;
        uint32          value2;
 
        /* Degenerate case */
        if (value == 0)
        {
                *a = '0';
                return 1;
        }
 
        olength = decimalLength64(value);
 
        /* Compute the result string. */
        while (value >= 100000000)
        {
                const uint64 q = value / 100000000;
-               uint32          value2 = (uint32) (value - 100000000 * q);
+               uint32          value3 = (uint32) (value - 100000000 * q);
 
-               const uint32 c = value2 % 10000;
-               const uint32 d = value2 / 10000;
+               const uint32 c = value3 % 10000;
+               const uint32 d = value3 / 10000;
                const uint32 c0 = (c % 100) << 1;
                const uint32 c1 = (c / 100) << 1;
                const uint32 d0 = (d % 100) << 1;
                const uint32 d1 = (d / 100) << 1;
 
                char       *pos = a + olength - i;
 
                value = q;
 
                memcpy(pos - 2, DIGIT_TABLE + c0, 2);
                memcpy(pos - 4, DIGIT_TABLE + c1, 2);
                memcpy(pos - 6, DIGIT_TABLE + d0, 2);
                memcpy(pos - 8, DIGIT_TABLE + d1, 2);
                i += 8;
        }
 
        /* Switch to 32-bit for speed */
        value2 = (uint32) value;
 
        if (value2 >= 10000)
        {
                const uint32 c = value2 - 10000 * (value2 / 10000);
diff --git a/src/backend/utils/adt/partitionfuncs.c b/src/backend/utils/adt/partitionfuncs.c
index 109dc8023e1..a45c3f9d48a 100644
--- a/src/backend/utils/adt/partitionfuncs.c
+++ b/src/backend/utils/adt/partitionfuncs.c
@@ -219,29 +219,29 @@ pg_partition_ancestors(PG_FUNCTION_ARGS)
 
                funcctx = SRF_FIRSTCALL_INIT();
 
                if (!check_rel_can_be_partition(relid))
                        SRF_RETURN_DONE(funcctx);
 
                oldcxt = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
 
                ancestors = get_partition_ancestors(relid);
                ancestors = lcons_oid(relid, ancestors);
 
                /* The only state we need is the ancestors list */
                funcctx->user_fctx = (void *) ancestors;
 
                MemoryContextSwitchTo(oldcxt);
        }
 
        funcctx = SRF_PERCALL_SETUP();
        ancestors = (List *) funcctx->user_fctx;
 
        if (funcctx->call_cntr < list_length(ancestors))
        {
-               Oid                     relid = list_nth_oid(ancestors, funcctx->call_cntr);
+               Oid                     nextrel = list_nth_oid(ancestors, funcctx->call_cntr);
 
-               SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(relid));
+               SRF_RETURN_NEXT(funcctx, ObjectIdGetDatum(nextrel));
        }
 
        SRF_RETURN_DONE(funcctx);
 }
diff --git a/src/backend/utils/adt/ruleutils.c b/src/backend/utils/adt/ruleutils.c
index 9959f6910e9..ad2d1a3e4ec 100644
--- a/src/backend/utils/adt/ruleutils.c
+++ b/src/backend/utils/adt/ruleutils.c
@@ -8091,47 +8091,45 @@ get_parameter(Param *param, deparse_context *context)
        if (param->paramkind == PARAM_EXTERN && context->namespaces != NIL)
        {
                dpns = llast(context->namespaces);
                if (dpns->argnames &&
                        param->paramid > 0 &&
                        param->paramid <= dpns->numargs)
                {
                        char       *argname = dpns->argnames[param->paramid - 1];
 
                        if (argname)
                        {
                                bool            should_qualify = false;
                                ListCell   *lc;
 
                                /*
                                 * Qualify the parameter name if there are any other deparse
                                 * namespaces with range tables.  This avoids qualifying in
                                 * trivial cases like "RETURN a + b", but makes it safe in all
                                 * other cases.
                                 */
                                foreach(lc, context->namespaces)
                                {
-                                       deparse_namespace *dpns = lfirst(lc);
-
-                                       if (dpns->rtable_names != NIL)
+                                       if (((deparse_namespace *) lfirst(lc))->rtable_names != NIL)
                                        {
                                                should_qualify = true;
                                                break;
                                        }
                                }
                                if (should_qualify)
                                {
                                        appendStringInfoString(context->buf, quote_identifier(dpns->funcname));
                                        appendStringInfoChar(context->buf, '.');
                                }
 
                                appendStringInfoString(context->buf, quote_identifier(argname));
                                return;
                        }
                }
        }
 
        /*
         * Not PARAM_EXEC, or couldn't find referent: just print $N.
         */
        appendStringInfo(context->buf, "$%d", param->paramid);
 }
diff --git a/src/pl/plpgsql/src/pl_funcs.c b/src/pl/plpgsql/src/pl_funcs.c
index 93d9cef06ba..8d7b6b58c05 100644
--- a/src/pl/plpgsql/src/pl_funcs.c
+++ b/src/pl/plpgsql/src/pl_funcs.c
@@ -1628,51 +1628,50 @@ plpgsql_dumptree(PLpgSQL_function *func)
                                        {
                                                printf("                                  DEFAULT ");
                                                dump_expr(var->default_val);
                                                printf("\n");
                                        }
                                        if (var->cursor_explicit_expr != NULL)
                                        {
                                                if (var->cursor_explicit_argrow >= 0)
                                                        printf("                                  CURSOR argument row %d\n", var->cursor_explicit_argrow);
 
                                                printf("                                  CURSOR IS ");
                                                dump_expr(var->cursor_explicit_expr);
                                                printf("\n");
                                        }
                                        if (var->promise != PLPGSQL_PROMISE_NONE)
                                                printf("                                  PROMISE %d\n",
                                                           (int) var->promise);
                                }
                                break;
                        case PLPGSQL_DTYPE_ROW:
                                {
                                        PLpgSQL_row *row = (PLpgSQL_row *) d;
-                                       int                     i;
 
                                        printf("ROW %-16s fields", row->refname);
-                                       for (i = 0; i < row->nfields; i++)
+                                       for (int j = 0; j < row->nfields; j++)
                                        {
-                                               printf(" %s=var %d", row->fieldnames[i],
-                                                          row->varnos[i]);
+                                               printf(" %s=var %d", row->fieldnames[j],
+                                                          row->varnos[j]);
                                        }
                                        printf("\n");
                                }
                                break;
                        case PLPGSQL_DTYPE_REC:
                                printf("REC %-16s typoid %u\n",
                                           ((PLpgSQL_rec *) d)->refname,
                                           ((PLpgSQL_rec *) d)->rectypeid);
                                if (((PLpgSQL_rec *) d)->isconst)
                                        printf("                                  CONSTANT\n");
                                if (((PLpgSQL_rec *) d)->notnull)
                                        printf("                                  NOT NULL\n");
                                if (((PLpgSQL_rec *) d)->default_val != NULL)
                                {
                                        printf("                                  DEFAULT ");
                                        dump_expr(((PLpgSQL_rec *) d)->default_val);
                                        printf("\n");
                                }
                                break;
                        case PLPGSQL_DTYPE_RECFIELD:
                                printf("RECFIELD %-16s of REC %d\n",
                                           ((PLpgSQL_recfield *) d)->fieldname,

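For anyone reproducing the warnings locally, the pattern every hunk above eliminates boils down to the following minimal sketch (function names are mine, not from the tree): an inner declaration of the same type shadows a local in an enclosing scope, which gcc's -Wshadow=compatible-local reports.

```c
#include <assert.h>

/*
 * The shadowing pattern: the inner "int i" shadows the outer one.
 * Compiling with gcc -Wshadow=compatible-local warns:
 *   declaration of 'i' shadows a previous local
 */
static int
count_pairs_shadowed(int n)
{
	int			i;
	int			total = 0;

	for (i = 0; i < n; i++)
	{
		for (int i = 0; i < n; i++)	/* shadows outer i */
			total++;
	}
	return total;
}

/* The style of fix applied above: rename the inner, narrower-scope var. */
static int
count_pairs_renamed(int n)
{
	int			i;
	int			total = 0;

	for (i = 0; i < n; i++)
	{
		for (int j = 0; j < n; j++)
			total++;
	}
	return total;
}
```

Building with `gcc -Wshadow=compatible-local -Werror` rejects the first function and accepts the second, which is why these hunks rename the inner variables rather than the outer ones.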