I gave this a very quick look; I don't claim to understand it or anything, but I thought these trivial cleanups worthwhile. The only non-cosmetic thing is changing the order of arguments to the SOn_printf() calls in 0008; I think they are contrary to what the comment says.
I don't propose to commit 0003 of course, since it's not our policy; that's just to allow running pgindent sanely, which gives you 0004 (though my local pgindent has an unrelated fix). And after that you notice the issue that 0005 fixes.

I did notice that show_incremental_sort_group_info() seems to be doing things the hard way. I got there because it throws this warning:

/pgsql/source/master/src/backend/commands/explain.c: In function 'show_incremental_sort_group_info':
/pgsql/source/master/src/backend/commands/explain.c:2766:39: warning: passing argument 2 of 'lappend' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
   methodNames = lappend(methodNames, sortMethodName);
                                      ^~~~~~~~~~~~~~
In file included from /pgsql/source/master/src/include/access/xact.h:20,
                 from /pgsql/source/master/src/backend/commands/explain.c:16:
/pgsql/source/master/src/include/nodes/pg_list.h:509:14: note: expected 'void *' but argument is of type 'const char *'
 extern List *lappend(List *list, void *datum);
              ^~~~~~~
/pgsql/source/master/src/backend/commands/explain.c:2766:39: warning: passing 'const char *' to parameter of type 'void *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
   methodNames = lappend(methodNames, sortMethodName);
                                      ^~~~~~~~~~~~~~
/pgsql/source/master/src/include/nodes/pg_list.h:509:40: note: passing argument to parameter 'datum' here
 extern List *lappend(List *list, void *datum);
                                  ^
1 warning generated.

(Eh, it's funny that GCC reports two warnings about the same line, and then says there's one warning.)

I suppose you could silence this by adding pstrdup(), and then use list_free_deep (you have to move the sortMethodName declaration into the inner scope for that, but that seems fine). Or maybe there's a clever way around it.
But I hesitate to send a patch for that, because I think the whole function is written to handle the text format and the other output formats completely separately -- yet looking at show_modifytable_info(), for example, it seems you can call ExplainOpenGroup, ExplainPropertyText, ExplainPropertyList etc. in all explain output modes, and those routines will take care of emitting the data in the correct format, without show_incremental_sort_group_info having to duplicate everything.

HTH.  I would really like to get this patch done for pg13.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From f0e70563197ffd04c2afc7f2221d489561267669 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 17:43:48 -0300
Subject: [PATCH 1/8] fix typo

---
 src/backend/executor/nodeIncrementalSort.c | 6 +++---
 src/include/executor/nodeIncrementalSort.h | 4 +---
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c
index e6f749a798..44c6c17fc6 100644
--- a/src/backend/executor/nodeIncrementalSort.c
+++ b/src/backend/executor/nodeIncrementalSort.c
@@ -1,6 +1,6 @@
 /*-------------------------------------------------------------------------
  *
- * nodeIncremenalSort.c
+ * nodeIncrementalSort.c
  *	  Routines to handle incremental sorting of relations.
  *
  * DESCRIPTION
@@ -49,12 +49,12 @@
  * it can start producing rows early, before sorting the whole dataset,
  * which is a significant benefit especially for queries with LIMIT.
  *
- * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  *
  * IDENTIFICATION
- *	  src/backend/executor/nodeIncremenalSort.c
+ *	  src/backend/executor/nodeIncrementalSort.c
  *
  *-------------------------------------------------------------------------
  */
diff --git a/src/include/executor/nodeIncrementalSort.h b/src/include/executor/nodeIncrementalSort.h
index 90d7a81711..3113989272 100644
--- a/src/include/executor/nodeIncrementalSort.h
+++ b/src/include/executor/nodeIncrementalSort.h
@@ -2,9 +2,7 @@
  *
  * nodeIncrementalSort.h
  *
- *
- *
- * Portions Copyright (c) 1996-2018, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1996-2020, PostgreSQL Global Development Group
  * Portions Copyright (c) 1994, Regents of the University of California
  *
  * src/include/executor/nodeIncrementalSort.h
-- 
2.20.1
From bd11b01a1322108194750ff81ee317e1e77fc048 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 18:26:05 -0300
Subject: [PATCH 2/8] fix another typo

---
 src/backend/executor/nodeIncrementalSort.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c
index 44c6c17fc6..e2cb9511ba 100644
--- a/src/backend/executor/nodeIncrementalSort.c
+++ b/src/backend/executor/nodeIncrementalSort.c
@@ -639,7 +639,7 @@ ExecIncrementalSort(PlanState *pstate)
 		slot = ExecProcNode(outerNode);
 
 		/*
-		 * When the outer node can't provide us anymore tuples, then we
+		 * When the outer node can't provide us any more tuples, then we
 		 * can sort the current group and return those tuples.
 		 */
 		if (TupIsNull(slot))
-- 
2.20.1
From ad873efb35d7ea45b270739880ec2ee6d4eafefb Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 17:51:43 -0300
Subject: [PATCH 3/8] typedefs additions

---
 src/tools/pgindent/typedefs.list | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index e216de9570..c34014660d 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1,3 +1,12 @@
+ExplainWorkersState
+IncrementalSortState
+IncrementalSortGroupInfo
+IncrementalSort
+IncrementalSortInfo
+SharedIncrementalSortInfo
+IncrementalSortExecutionStatus
+IncrementalSortPath
+PresortedKeyData
 ABITVEC
 ACCESS_ALLOWED_ACE
 ACL
-- 
2.20.1
>From d36a342906cb9002b9689c225b1412599c7b7aec Mon Sep 17 00:00:00 2001 From: Alvaro Herrera <alvhe...@alvh.no-ip.org> Date: Thu, 12 Mar 2020 17:51:56 -0300 Subject: [PATCH 4/8] pgindent --- src/backend/commands/explain.c | 49 +-- src/backend/executor/execAmi.c | 5 +- src/backend/executor/execProcnode.c | 2 +- src/backend/executor/nodeIncrementalSort.c | 384 +++++++++++---------- src/backend/nodes/copyfuncs.c | 6 +- src/backend/nodes/outfuncs.c | 4 +- src/backend/optimizer/path/allpaths.c | 10 +- src/backend/optimizer/path/costsize.c | 40 +-- src/backend/optimizer/path/pathkeys.c | 10 +- src/backend/optimizer/plan/createplan.c | 48 +-- src/backend/optimizer/plan/planner.c | 30 +- src/backend/optimizer/plan/setrefs.c | 2 +- src/backend/optimizer/util/pathnode.c | 28 +- src/backend/utils/misc/guc.c | 2 +- src/backend/utils/sort/tuplesort.c | 26 +- src/include/executor/nodeIncrementalSort.h | 2 +- src/include/nodes/execnodes.h | 34 +- src/include/optimizer/cost.h | 14 +- src/include/optimizer/pathnode.h | 10 +- src/include/optimizer/paths.h | 2 +- 20 files changed, 364 insertions(+), 344 deletions(-) diff --git a/src/backend/commands/explain.c b/src/backend/commands/explain.c index 8262c54e6a..2d6bf75521 100644 --- a/src/backend/commands/explain.c +++ b/src/backend/commands/explain.c @@ -83,7 +83,7 @@ static void show_upper_qual(List *qual, const char *qlabel, static void show_sort_keys(SortState *sortstate, List *ancestors, ExplainState *es); static void show_incremental_sort_keys(IncrementalSortState *incrsortstate, - List *ancestors, ExplainState *es); + List *ancestors, ExplainState *es); static void show_merge_append_keys(MergeAppendState *mstate, List *ancestors, ExplainState *es); static void show_agg_keys(AggState *astate, List *ancestors, @@ -714,7 +714,7 @@ ExplainPrintPlan(ExplainState *es, QueryDesc *queryDesc) * further down in the plan tree. 
*/ ps = queryDesc->planstate; - if (IsA(ps, GatherState) &&((Gather *) ps->plan)->invisible) + if (IsA(ps, GatherState) && ((Gather *) ps->plan)->invisible) { ps = outerPlanState(ps); es->hide_workers = true; @@ -2211,7 +2211,7 @@ show_scan_qual(List *qual, const char *qlabel, { bool useprefix; - useprefix = (IsA(planstate->plan, SubqueryScan) ||es->verbose); + useprefix = (IsA(planstate->plan, SubqueryScan) || es->verbose); show_qual(qual, qlabel, planstate, ancestors, useprefix, es); } @@ -2704,12 +2704,12 @@ show_sort_info(SortState *sortstate, ExplainState *es) static void show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo, - const char *groupLabel, ExplainState *es) + const char *groupLabel, ExplainState *es) { const char *sortMethodName; const char *spaceTypeName; - ListCell *methodCell; - int methodCount = list_length(groupInfo->sortMethods); + ListCell *methodCell; + int methodCount = list_length(groupInfo->sortMethods); if (es->format == EXPLAIN_FORMAT_TEXT) { @@ -2727,7 +2727,8 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo, if (groupInfo->maxMemorySpaceUsed > 0) { - long avgSpace = groupInfo->totalMemorySpaceUsed / groupInfo->groupCount; + long avgSpace = groupInfo->totalMemorySpaceUsed / groupInfo->groupCount; + spaceTypeName = tuplesort_space_type_name(SORT_SPACE_TYPE_MEMORY); appendStringInfo(es->str, " %s: %ldkB (avg), %ldkB (max)", spaceTypeName, avgSpace, @@ -2736,7 +2737,8 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo, if (groupInfo->maxDiskSpaceUsed > 0) { - long avgSpace = groupInfo->totalDiskSpaceUsed / groupInfo->groupCount; + long avgSpace = groupInfo->totalDiskSpaceUsed / groupInfo->groupCount; + spaceTypeName = tuplesort_space_type_name(SORT_SPACE_TYPE_DISK); /* Add a semicolon separator only if memory stats were printed. 
*/ if (groupInfo->maxMemorySpaceUsed > 0) @@ -2750,7 +2752,7 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo, } else { - List *methodNames = NIL; + List *methodNames = NIL; StringInfoData groupName; initStringInfo(&groupName); @@ -2767,21 +2769,21 @@ show_incremental_sort_group_info(IncrementalSortGroupInfo *groupInfo, if (groupInfo->maxMemorySpaceUsed > 0) { - long avgSpace = groupInfo->totalMemorySpaceUsed / groupInfo->groupCount; + long avgSpace = groupInfo->totalMemorySpaceUsed / groupInfo->groupCount; ExplainPropertyInteger("Average Sort Space Used", "kB", avgSpace, es); ExplainPropertyInteger("Maximum Sort Space Used", "kB", - groupInfo->maxMemorySpaceUsed, es); + groupInfo->maxMemorySpaceUsed, es); spaceTypeName = tuplesort_space_type_name(SORT_SPACE_TYPE_MEMORY); ExplainPropertyText("Sort Space Type", spaceTypeName, es); } if (groupInfo->maxDiskSpaceUsed > 0) { - long avgSpace = groupInfo->totalDiskSpaceUsed / groupInfo->groupCount; + long avgSpace = groupInfo->totalDiskSpaceUsed / groupInfo->groupCount; ExplainPropertyInteger("Average Sort Space Used", "kB", avgSpace, es); ExplainPropertyInteger("Maximum Sort Space Used", "kB", - groupInfo->maxDiskSpaceUsed, es); + groupInfo->maxDiskSpaceUsed, es); spaceTypeName = tuplesort_space_type_name(SORT_SPACE_TYPE_MEMORY); ExplainPropertyText("Sort Space Type", spaceTypeName, es); } @@ -2818,23 +2820,24 @@ show_incremental_sort_info(IncrementalSortState *incrsortstate, for (n = 0; n < incrsortstate->shared_info->num_workers; n++) { IncrementalSortInfo *incsort_info = - &incrsortstate->shared_info->sinfo[n]; + &incrsortstate->shared_info->sinfo[n]; + /* * XXX: The previous version of the patch chcked: * fullsort_instrument->sortMethod == SORT_TYPE_STILL_IN_PROGRESS - * and continued if the condition was true (with the comment "ignore - * any unfilled slots"). 
- * I'm not convinced that makes sense since the same sort instrument - * can have been used multiple times, so the last time it being used - * being still in progress, doesn't seem to be relevant. - * Instead I'm now checking to see if the group count for each group - * info is 0. If both are 0, then we exclude the worker since it - * didn't contribute anything meaningful. + * and continued if the condition was true (with the comment + * "ignore any unfilled slots"). I'm not convinced that makes + * sense since the same sort instrument can have been used + * multiple times, so the last time it being used being still in + * progress, doesn't seem to be relevant. Instead I'm now checking + * to see if the group count for each group info is 0. If both are + * 0, then we exclude the worker since it didn't contribute + * anything meaningful. */ fullsortGroupInfo = &incsort_info->fullsortGroupInfo; prefixsortGroupInfo = &incsort_info->prefixsortGroupInfo; if (fullsortGroupInfo->groupCount == 0 && - prefixsortGroupInfo->groupCount == 0) + prefixsortGroupInfo->groupCount == 0) continue; if (!opened_group) diff --git a/src/backend/executor/execAmi.c b/src/backend/executor/execAmi.c index cba648a95e..e2154ba86a 100644 --- a/src/backend/executor/execAmi.c +++ b/src/backend/executor/execAmi.c @@ -566,9 +566,10 @@ ExecSupportsBackwardScan(Plan *node) return true; case T_IncrementalSort: + /* - * Unlike full sort, incremental sort keeps only a single group - * of tuples in memory, so it can't scan backwards. + * Unlike full sort, incremental sort keeps only a single group of + * tuples in memory, so it can't scan backwards. 
*/ return false; diff --git a/src/backend/executor/execProcnode.c b/src/backend/executor/execProcnode.c index 8051f46a71..d15a86a706 100644 --- a/src/backend/executor/execProcnode.c +++ b/src/backend/executor/execProcnode.c @@ -859,7 +859,7 @@ ExecSetTupleBound(int64 tuples_needed, PlanState *child_node) * good idea to integrate this signaling with the parameter-change * mechanism. */ - IncrementalSortState *sortState = (IncrementalSortState *) child_node; + IncrementalSortState *sortState = (IncrementalSortState *) child_node; if (tuples_needed < 0) { diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c index e2cb9511ba..4f6b438e7b 100644 --- a/src/backend/executor/nodeIncrementalSort.c +++ b/src/backend/executor/nodeIncrementalSort.c @@ -70,10 +70,10 @@ static void instrumentSortedGroup(PlanState *pstate, IncrementalSortGroupInfo *groupInfo, - Tuplesortstate *sortState) + Tuplesortstate *sortState) { IncrementalSortState *node = castNode(IncrementalSortState, pstate); - TuplesortInstrumentation sort_instr; + TuplesortInstrumentation sort_instr; groupInfo->groupCount++; @@ -96,7 +96,7 @@ instrumentSortedGroup(PlanState *pstate, IncrementalSortGroupInfo *groupInfo, if (!list_member_int(groupInfo->sortMethods, sort_instr.sortMethod)) groupInfo->sortMethods = lappend_int(groupInfo->sortMethods, - sort_instr.sortMethod); + sort_instr.sortMethod); /* Record shared stats if we're a parallel worker. 
*/ if (node->shared_info && node->am_worker) @@ -105,7 +105,7 @@ instrumentSortedGroup(PlanState *pstate, IncrementalSortGroupInfo *groupInfo, Assert(ParallelWorkerNumber <= node->shared_info->num_workers); memcpy(&node->shared_info->sinfo[ParallelWorkerNumber], - &node->incsort_info, sizeof(IncrementalSortInfo)); + &node->incsort_info, sizeof(IncrementalSortInfo)); } } @@ -115,31 +115,31 @@ instrumentSortedGroup(PlanState *pstate, IncrementalSortGroupInfo *groupInfo, static void preparePresortedCols(IncrementalSortState *node) { - IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan; - int presortedCols, - i; + IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan; + int presortedCols, + i; Assert(IsA(plannode, IncrementalSort)); presortedCols = plannode->presortedCols; node->presorted_keys = (PresortedKeyData *) palloc(presortedCols * - sizeof(PresortedKeyData)); + sizeof(PresortedKeyData)); /* Pre-cache comparison functions for each pre-sorted key. */ for (i = 0; i < presortedCols; i++) { - Oid equalityOp, - equalityFunc; - PresortedKeyData *key; + Oid equalityOp, + equalityFunc; + PresortedKeyData *key; key = &node->presorted_keys[i]; key->attno = plannode->sort.sortColIdx[i]; equalityOp = get_equality_op_for_ordering_op( - plannode->sort.sortOperators[i], NULL); + plannode->sort.sortOperators[i], NULL); if (!OidIsValid(equalityOp)) elog(ERROR, "missing equality operator for ordering operator %u", - plannode->sort.sortOperators[i]); + plannode->sort.sortOperators[i]); equalityFunc = get_opcode(equalityOp); if (!OidIsValid(equalityFunc)) @@ -151,7 +151,7 @@ preparePresortedCols(IncrementalSortState *node) /* We can initialize the callinfo just once and re-use it */ key->fcinfo = palloc0(SizeForFunctionCallInfo(2)); InitFunctionCallInfoData(*key->fcinfo, &key->flinfo, 2, - plannode->sort.collations[i], NULL, NULL); + plannode->sort.collations[i], NULL, NULL); key->fcinfo->args[0].isnull = false; key->fcinfo->args[1].isnull = false; } @@ 
-167,27 +167,28 @@ preparePresortedCols(IncrementalSortState *node) static bool isCurrentGroup(IncrementalSortState *node, TupleTableSlot *pivot, TupleTableSlot *tuple) { - int presortedCols, i; + int presortedCols, + i; Assert(IsA(node->ss.ps.plan, IncrementalSort)); presortedCols = ((IncrementalSort *) node->ss.ps.plan)->presortedCols; /* - * That the input is sorted by keys * (0, ... n) implies that the tail keys - * are more likely to change. Therefore we do our comparison starting from - * the last pre-sorted column to optimize for early detection of + * That the input is sorted by keys * (0, ... n) implies that the tail + * keys are more likely to change. Therefore we do our comparison starting + * from the last pre-sorted column to optimize for early detection of * inequality and minimizing the number of function calls.. */ for (i = presortedCols - 1; i >= 0; i--) { - Datum datumA, - datumB, - result; - bool isnullA, - isnullB; - AttrNumber attno = node->presorted_keys[i].attno; - PresortedKeyData *key; + Datum datumA, + datumB, + result; + bool isnullA, + isnullB; + AttrNumber attno = node->presorted_keys[i].attno; + PresortedKeyData *key; datumA = slot_getattr(pivot, attno, &isnullA); datumB = slot_getattr(tuple, attno, &isnullB); @@ -243,13 +244,13 @@ static void switchToPresortedPrefixMode(PlanState *pstate) { IncrementalSortState *node = castNode(IncrementalSortState, pstate); - ScanDirection dir; - int64 nTuples = 0; - bool lastTuple = false; - bool firstTuple = true; - TupleDesc tupDesc; - PlanState *outerNode; - IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan; + ScanDirection dir; + int64 nTuples = 0; + bool lastTuple = false; + bool firstTuple = true; + TupleDesc tupDesc; + PlanState *outerNode; + IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan; dir = node->ss.ps.state->es_direction; outerNode = outerPlanState(node); @@ -258,22 +259,22 @@ switchToPresortedPrefixMode(PlanState *pstate) if (node->prefixsort_state == 
NULL) { Tuplesortstate *prefixsort_state; - int presortedCols = plannode->presortedCols; + int presortedCols = plannode->presortedCols; /* - * Optimize the sort by assuming the prefix columns are all equal - * and thus we only need to sort by any remaining columns. + * Optimize the sort by assuming the prefix columns are all equal and + * thus we only need to sort by any remaining columns. */ prefixsort_state = tuplesort_begin_heap( - tupDesc, - plannode->sort.numCols - presortedCols, - &(plannode->sort.sortColIdx[presortedCols]), - &(plannode->sort.sortOperators[presortedCols]), - &(plannode->sort.collations[presortedCols]), - &(plannode->sort.nullsFirst[presortedCols]), - work_mem, - NULL, - false); + tupDesc, + plannode->sort.numCols - presortedCols, + &(plannode->sort.sortColIdx[presortedCols]), + &(plannode->sort.sortOperators[presortedCols]), + &(plannode->sort.collations[presortedCols]), + &(plannode->sort.nullsFirst[presortedCols]), + work_mem, + NULL, + false); node->prefixsort_state = prefixsort_state; } else @@ -284,15 +285,15 @@ switchToPresortedPrefixMode(PlanState *pstate) /* * If the current node has a bound, then it's reasonably likely that a - * large prefix key group will benefit from bounded sort, so configure - * the tuplesort to allow for that optimization. + * large prefix key group will benefit from bounded sort, so configure the + * tuplesort to allow for that optimization. 
*/ if (node->bounded) { SO1_printf("Setting bound on presorted prefix tuplesort to: %ld\n", - node->bound - node->bound_Done); + node->bound - node->bound_Done); tuplesort_set_bound(node->prefixsort_state, - node->bound - node->bound_Done); + node->bound - node->bound_Done); } for (;;) @@ -315,12 +316,12 @@ switchToPresortedPrefixMode(PlanState *pstate) else { tuplesort_gettupleslot(node->fullsort_state, - ScanDirectionIsForward(dir), - false, node->transfer_tuple, NULL); + ScanDirectionIsForward(dir), + false, node->transfer_tuple, NULL); /* - * If this is our first time through the loop, then we need to save the - * first tuple we get as our new group pivot. + * If this is our first time through the loop, then we need to + * save the first tuple we get as our new group pivot. */ if (TupIsNull(node->group_pivot)) ExecCopySlot(node->group_pivot, node->transfer_tuple); @@ -332,16 +333,18 @@ switchToPresortedPrefixMode(PlanState *pstate) } else { - /* The tuple isn't part of the current batch so we need to carry - * it over into the next set up tuples we transfer out of the full - * sort tuplesort into the presorted prefix tuplesort. We don't - * actually have to do anything special to save the tuple since - * we've already loaded it into the node->transfer_tuple slot, and, - * even though that slot points to memory inside the full sort - * tuplesort, we can't reset that tuplesort anyway until we've - * fully transferred out of its tuples, so this reference is safe. - * We do need to reset the group pivot tuple though since we've - * finished the current prefix key group. + /* + * The tuple isn't part of the current batch so we need to + * carry it over into the next set up tuples we transfer out + * of the full sort tuplesort into the presorted prefix + * tuplesort. 
We don't actually have to do anything special to + * save the tuple since we've already loaded it into the + * node->transfer_tuple slot, and, even though that slot + * points to memory inside the full sort tuplesort, we can't + * reset that tuplesort anyway until we've fully transferred + * out of its tuples, so this reference is safe. We do need to + * reset the group pivot tuple though since we've finished the + * current prefix key group. */ ExecClearTuple(node->group_pivot); break; @@ -351,6 +354,7 @@ switchToPresortedPrefixMode(PlanState *pstate) firstTuple = false; if (lastTuple) + /* * We retain the current group pivot tuple since we haven't yet * found the end of the current prefix key group. @@ -371,9 +375,9 @@ switchToPresortedPrefixMode(PlanState *pstate) if (lastTuple) { /* - * We've confirmed that all tuples remaining in the full sort batch - * is in the same prefix key group and moved all of those tuples into - * the presorted prefix tuplesort. Now we can save our pivot comparison + * We've confirmed that all tuples remaining in the full sort batch is + * in the same prefix key group and moved all of those tuples into the + * presorted prefix tuplesort. Now we can save our pivot comparison * tuple and continue fetching tuples from the outer execution node to * load into the presorted prefix tuplesort. */ @@ -381,9 +385,10 @@ switchToPresortedPrefixMode(PlanState *pstate) SO_printf("Setting execution_status to INCSORT_LOADPREFIXSORT (switchToPresortedPrefixMode)\n"); node->execution_status = INCSORT_LOADPREFIXSORT; - /* Make sure we clear the transfer tuple slot so that next time we - * encounter a large prefix key group we don't incorrectly assume - * we have a tuple carried over from the previous group. + /* + * Make sure we clear the transfer tuple slot so that next time we + * encounter a large prefix key group we don't incorrectly assume we + * have a tuple carried over from the previous group. 
*/ ExecClearTuple(node->transfer_tuple); } @@ -391,28 +396,28 @@ switchToPresortedPrefixMode(PlanState *pstate) { /* * We finished a group but didn't consume all of the tuples from the - * full sort batch sorter, so we'll sort this batch, let the inner node - * read out all of those tuples, and then come back around to find - * another batch. + * full sort batch sorter, so we'll sort this batch, let the inner + * node read out all of those tuples, and then come back around to + * find another batch. */ SO1_printf("Sorting presorted prefix tuplesort with %ld tuples\n", nTuples); tuplesort_performsort(node->prefixsort_state); if (pstate->instrument != NULL) instrumentSortedGroup(pstate, - &node->incsort_info.prefixsortGroupInfo, - node->prefixsort_state); + &node->incsort_info.prefixsortGroupInfo, + node->prefixsort_state); if (node->bounded) { /* * If the current node has a bound, and we've already sorted n - * tuples, then the functional bound remaining is - * (original bound - n), so store the current number of processed - * tuples for use in configuring sorting bound. + * tuples, then the functional bound remaining is (original bound + * - n), so store the current number of processed tuples for use + * in configuring sorting bound. 
*/ SO2_printf("Changing bound_Done from %ld to %ld\n", - Min(node->bound, node->bound_Done + nTuples), node->bound_Done); + Min(node->bound, node->bound_Done + nTuples), node->bound_Done); node->bound_Done = Min(node->bound, node->bound_Done + nTuples); } @@ -477,16 +482,16 @@ static TupleTableSlot * ExecIncrementalSort(PlanState *pstate) { IncrementalSortState *node = castNode(IncrementalSortState, pstate); - EState *estate; - ScanDirection dir; - Tuplesortstate *read_sortstate; - Tuplesortstate *fullsort_state; - TupleTableSlot *slot; - IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan; - PlanState *outerNode; - TupleDesc tupDesc; - int64 nTuples = 0; - int64 minGroupSize; + EState *estate; + ScanDirection dir; + Tuplesortstate *read_sortstate; + Tuplesortstate *fullsort_state; + TupleTableSlot *slot; + IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan; + PlanState *outerNode; + TupleDesc tupDesc; + int64 nTuples = 0; + int64 minGroupSize; CHECK_FOR_INTERRUPTS(); @@ -495,7 +500,7 @@ ExecIncrementalSort(PlanState *pstate) fullsort_state = node->fullsort_state; if (node->execution_status == INCSORT_READFULLSORT - || node->execution_status == INCSORT_READPREFIXSORT) + || node->execution_status == INCSORT_READPREFIXSORT) { /* * Return next tuple from the current sorted group set if available. @@ -505,35 +510,37 @@ ExecIncrementalSort(PlanState *pstate) slot = node->ss.ps.ps_ResultTupleSlot; if (tuplesort_gettupleslot(read_sortstate, ScanDirectionIsForward(dir), false, slot, NULL) || node->finished) + /* - * TODO: there isn't a good test case for the node->finished - * case directly, but lots of other stuff fails if it's not - * there. If the outer node will fail when trying to fetch - * too many tuples, then things break if that test isn't here. + * TODO: there isn't a good test case for the node->finished case + * directly, but lots of other stuff fails if it's not there. 
If + * the outer node will fail when trying to fetch too many tuples, + * then things break if that test isn't here. */ return slot; else if (node->n_fullsort_remaining > 0) { /* * When we transition to presorted prefix mode, we might have - * accumulated at least one additional prefix key group in the full - * sort tuplesort. The first call to switchToPresortedPrefixMode() - * pulled the one of those groups out, and we've returned those - * tuples to the inner node, but if we tuples remaining in that - * tuplesort (i.e., n_fullsort_remaining > 0) at this point we - * need to do that again. + * accumulated at least one additional prefix key group in the + * full sort tuplesort. The first call to + * switchToPresortedPrefixMode() pulled the one of those groups + * out, and we've returned those tuples to the inner node, but if + * we tuples remaining in that tuplesort (i.e., + * n_fullsort_remaining > 0) at this point we need to do that + * again. */ SO1_printf("Re-calling switchToPresortedPrefixMode() because n_fullsort_remaining is > 0 (%ld)\n", - node->n_fullsort_remaining); + node->n_fullsort_remaining); switchToPresortedPrefixMode(pstate); } else { /* - * If we don't have any already sorted tuples to read, and we're not - * in the middle of transitioning into presorted prefix sort mode, - * then it's time to start the process all over again by building - * new full sort group. + * If we don't have any already sorted tuples to read, and we're + * not in the middle of transitioning into presorted prefix sort + * mode, then it's time to start the process all over again by + * building new full sort group. */ SO_printf("Setting execution_status to INCSORT_LOADFULLSORT (n_fullsort_remaining > 0)\n"); node->execution_status = INCSORT_LOADFULLSORT; @@ -541,8 +548,8 @@ ExecIncrementalSort(PlanState *pstate) } /* - * Want to scan subplan in the forward direction while creating the - * sorted data. 
+ * Want to scan subplan in the forward direction while creating the sorted + * data. */ estate->es_direction = ForwardScanDirection; @@ -570,15 +577,15 @@ ExecIncrementalSort(PlanState *pstate) * columns. */ fullsort_state = tuplesort_begin_heap( - tupDesc, - plannode->sort.numCols, - plannode->sort.sortColIdx, - plannode->sort.sortOperators, - plannode->sort.collations, - plannode->sort.nullsFirst, - work_mem, - NULL, - false); + tupDesc, + plannode->sort.numCols, + plannode->sort.sortColIdx, + plannode->sort.sortOperators, + plannode->sort.collations, + plannode->sort.nullsFirst, + work_mem, + NULL, + false); node->fullsort_state = fullsort_state; } else @@ -593,13 +600,13 @@ ExecIncrementalSort(PlanState *pstate) */ if (node->bounded) { - int64 currentBound = node->bound - node->bound_Done; + int64 currentBound = node->bound - node->bound_Done; /* * Bounded sort isn't likely to be a useful optimization for full * sort mode since we limit full sort mode to a relatively small - * number of tuples and tuplesort doesn't switch over to top-n heap - * sort anyway unless it hits (2 * bound) tuples. + * number of tuples and tuplesort doesn't switch over to top-n + * heap sort anyway unless it hits (2 * bound) tuples. */ if (currentBound < DEFAULT_MIN_GROUP_SIZE) tuplesort_set_bound(fullsort_state, currentBound); @@ -609,9 +616,11 @@ ExecIncrementalSort(PlanState *pstate) else minGroupSize = DEFAULT_MIN_GROUP_SIZE; - /* Because we have to read the next tuple to find out that we've + /* + * Because we have to read the next tuple to find out that we've * encountered a new prefix key group on subsequent groups we have to - * carry over that extra tuple and add it to the new group's sort here. + * carry over that extra tuple and add it to the new group's sort + * here. 
*/ if (!TupIsNull(node->group_pivot)) { @@ -620,10 +629,10 @@ ExecIncrementalSort(PlanState *pstate) /* * We're in full sort mode accumulating a minimum number of tuples - * and not checking for prefix key equality yet, so we can't assume - * the group pivot tuple will reamin the same -- unless we're using - * a minimum group size of 1, in which case the pivot is obviously - * still the pviot. + * and not checking for prefix key equality yet, so we can't + * assume the group pivot tuple will reamin the same -- unless + * we're using a minimum group size of 1, in which case the pivot + * is obviously still the pviot. */ if (nTuples != minGroupSize) ExecClearTuple(node->group_pivot); @@ -651,8 +660,8 @@ ExecIncrementalSort(PlanState *pstate) if (pstate->instrument != NULL) instrumentSortedGroup(pstate, - &node->incsort_info.fullsortGroupInfo, - fullsort_state); + &node->incsort_info.fullsortGroupInfo, + fullsort_state); SO_printf("Setting execution_status to INCSORT_READFULLSORT (final tuple) \n"); node->execution_status = INCSORT_READFULLSORT; @@ -663,9 +672,10 @@ ExecIncrementalSort(PlanState *pstate) if (nTuples < minGroupSize) { /* - * If we have yet hit our target minimum group size, then don't - * both with checking for inclusion in the current prefix group - * since a large number of very tiny sorts is inefficient. + * If we have yet hit our target minimum group size, then + * don't both with checking for inclusion in the current + * prefix group since a large number of very tiny sorts is + * inefficient. */ tuplesort_puttupleslot(fullsort_state, slot); nTuples++; @@ -695,11 +705,11 @@ ExecIncrementalSort(PlanState *pstate) { /* * Since the tuple we fetched isn't part of the current - * prefix key group we can't sort it as part of this - * sort group. Instead we need to carry it over to the - * next group. We use the group_pivot slot as a temp - * container for that purpose even though we won't actually - * treat it as a group pivot. 
+					 * prefix key group we can't sort it as part of this sort
+					 * group. Instead we need to carry it over to the next
+					 * group. We use the group_pivot slot as a temp container
+					 * for that purpose even though we won't actually treat it
+					 * as a group pivot.
 					 */
 					ExecCopySlot(node->group_pivot, slot);
 
@@ -707,13 +717,13 @@ ExecIncrementalSort(PlanState *pstate)
 					{
 						/*
 						 * If the current node has a bound, and we've already
-						 * sorted n tuples, then the functional bound remaining
-						 * is (original bound - n), so store the current number
-						 * of processed tuples for use in configuring sorting
-						 * bound.
+						 * sorted n tuples, then the functional bound
+						 * remaining is (original bound - n), so store the
+						 * current number of processed tuples for use in
+						 * configuring sorting bound.
 						 */
 						SO2_printf("Changing bound_Done from %ld to %ld\n",
-							Min(node->bound, node->bound_Done + nTuples), node->bound_Done);
+								   Min(node->bound, node->bound_Done + nTuples), node->bound_Done);
 						node->bound_Done = Min(node->bound, node->bound_Done + nTuples);
 					}
 
@@ -726,8 +736,8 @@ ExecIncrementalSort(PlanState *pstate)
 
 					if (pstate->instrument != NULL)
 						instrumentSortedGroup(pstate,
-							&node->incsort_info.fullsortGroupInfo,
-							fullsort_state);
+											  &node->incsort_info.fullsortGroupInfo,
+											  fullsort_state);
 
 					SO_printf("Setting execution_status to INCSORT_READFULLSORT (found end of group)\n");
 					node->execution_status = INCSORT_READFULLSORT;
@@ -737,17 +747,17 @@ ExecIncrementalSort(PlanState *pstate)
 
 					/*
 					 * Once we've processed DEFAULT_MAX_FULL_SORT_GROUP_SIZE tuples
-					 * then we make the assumption that it's likely that we've found
-					 * a large group of tuples having a single prefix key (as long
-					 * as the last tuple didn't shift us into reading from the full
-					 * sort mode tuplesort).
+					 * then we make the assumption that it's likely that we've found a
+					 * large group of tuples having a single prefix key (as long as
+					 * the last tuple didn't shift us into reading from the full sort
+					 * mode tuplesort).
 					 */
 					if (nTuples > DEFAULT_MAX_FULL_SORT_GROUP_SIZE &&
-							node->execution_status != INCSORT_READFULLSORT)
+						node->execution_status != INCSORT_READFULLSORT)
 					{
 						/*
-						 * The group pivot we have stored has already been put into the
-						 * tuplesort; we don't want to carry it over.
+						 * The group pivot we have stored has already been put into
+						 * the tuplesort; we don't want to carry it over.
 						 */
 						ExecClearTuple(node->group_pivot);
 
@@ -762,33 +772,36 @@ ExecIncrementalSort(PlanState *pstate)
 						tuplesort_performsort(fullsort_state);
 						if (pstate->instrument != NULL)
 							instrumentSortedGroup(pstate,
-								&node->incsort_info.fullsortGroupInfo,
-								fullsort_state);
+												  &node->incsort_info.fullsortGroupInfo,
+												  fullsort_state);
 
 						/*
-						 * If the full sort tuplesort happened to switch into top-n heapsort mode
-						 * then we will only be able to retrieve currentBound tuples (since the
-						 * tuplesort will have only retained the top-n tuples). This is safe even
-						 * though we haven't yet completed fetching the current prefix key group
-						 * because the tuples we've "lost" already sorted "below" the retained ones,
-						 * and we're already contractually guaranteed to not need any more than the
-						 * currentBount tuples.
+						 * If the full sort tuplesort happened to switch into top-n
+						 * heapsort mode then we will only be able to retrieve
+						 * currentBound tuples (since the tuplesort will have only
+						 * retained the top-n tuples). This is safe even though we
+						 * haven't yet completed fetching the current prefix key group
+						 * because the tuples we've "lost" already sorted "below" the
+						 * retained ones, and we're already contractually guaranteed
+						 * to not need any more than the currentBount tuples.
 						 */
 						if (tuplesort_used_bound(node->fullsort_state))
 						{
-							int64 currentBound = node->bound - node->bound_Done;
+							int64		currentBound = node->bound - node->bound_Done;
+
 							SO2_printf("Read %ld tuples, but setting to %ld because we used bounded sort\n",
-								nTuples, Min(currentBound, nTuples));
+									   nTuples, Min(currentBound, nTuples));
 							nTuples = Min(currentBound, nTuples);
 						}
 
 						SO1_printf("Setting n_fullsort_remaining to %ld and calling switchToPresortedPrefixMode()\n",
-							nTuples);
+								   nTuples);
 
 						/*
-						 * Track the number of tuples we need to move from the fullsort
-						 * to presorted prefix sort (we might have multiple prefix key
-						 * groups, so we need a way to see if we've actually finished).
+						 * Track the number of tuples we need to move from the
+						 * fullsort to presorted prefix sort (we might have multiple
+						 * prefix key groups, so we need a way to see if we've
+						 * actually finished).
 						 */
 						node->n_fullsort_remaining = nTuples;
 
@@ -800,11 +813,12 @@ ExecIncrementalSort(PlanState *pstate)
 						 * tuplesort, we know that unless that transition has verified
 						 * that all tuples belonged to the same prefix key group (in
 						 * which case we can go straight to continuing to load tuples
-						 * into that tuplesort), we should have a tuple to return here.
+						 * into that tuplesort), we should have a tuple to return
+						 * here.
 						 *
 						 * Either way, the appropriate execution status should have
-						 * been set by switchToPresortedPrefixMode(), so we can drop out
-						 * of the loop here and let the appropriate path kick in.
+						 * been set by switchToPresortedPrefixMode(), so we can drop
+						 * out of the loop here and let the appropriate path kick in.
 						 */
 						break;
 					}
@@ -815,9 +829,9 @@ ExecIncrementalSort(PlanState *pstate)
 	{
 		/*
 		 * Since we only enter this state after determining that all remaining
-		 * tuples in the full sort tuplesort have the same prefix, we've already
-		 * established a current group pivot tuple (but wasn't carried over;
-		 * it's already been put into the prefix sort tuplesort).
+		 * tuples in the full sort tuplesort have the same prefix, we've
+		 * already established a current group pivot tuple (but wasn't carried
+		 * over; it's already been put into the prefix sort tuplesort).
 		 */
 		Assert(!TupIsNull(node->group_pivot));
 
@@ -835,9 +849,10 @@ ExecIncrementalSort(PlanState *pstate)
 			if (isCurrentGroup(node, node->group_pivot, slot))
 			{
 				/*
-				 * Fetch tuples and put them into the presorted prefix tuplesort
-				 * until we find changed prefix keys. Only then can we guarantee
-				 * sort stability of the tuples we've already accumulated.
+				 * Fetch tuples and put them into the presorted prefix
+				 * tuplesort until we find changed prefix keys. Only then can
+				 * we guarantee sort stability of the tuples we've already
+				 * accumulated.
 				 */
 				tuplesort_puttupleslot(node->prefixsort_state, slot);
 				nTuples++;
@@ -862,8 +877,8 @@ ExecIncrementalSort(PlanState *pstate)
 
 			if (pstate->instrument != NULL)
 				instrumentSortedGroup(pstate,
-					&node->incsort_info.prefixsortGroupInfo,
-					node->prefixsort_state);
+									  &node->incsort_info.prefixsortGroupInfo,
+									  node->prefixsort_state);
 
 			SO_printf("Setting execution_status to INCSORT_READPREFIXSORT (found end of group)\n");
 			node->execution_status = INCSORT_READPREFIXSORT;
@@ -872,12 +887,12 @@ ExecIncrementalSort(PlanState *pstate)
 		{
 			/*
 			 * If the current node has a bound, and we've already sorted n
-			 * tuples, then the functional bound remaining is
-			 * (original bound - n), so store the current number of processed
-			 * tuples for use in configuring sorting bound.
+			 * tuples, then the functional bound remaining is (original bound
+			 * - n), so store the current number of processed tuples for use
+			 * in configuring sorting bound.
 			 */
 			SO2_printf("Changing bound_Done from %ld to %ld\n",
-				Min(node->bound, node->bound_Done + nTuples), node->bound_Done);
+					   Min(node->bound, node->bound_Done + nTuples), node->bound_Done);
 			node->bound_Done = Min(node->bound, node->bound_Done + nTuples);
 		}
 	}
@@ -913,7 +928,7 @@ ExecIncrementalSort(PlanState *pstate)
 IncrementalSortState *
 ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags)
 {
-	IncrementalSortState	*incrsortstate;
+	IncrementalSortState *incrsortstate;
 
 	SO_printf("ExecInitIncrementalSort: initializing sort node\n");
 
@@ -948,9 +963,10 @@ ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags)
 	if (incrsortstate->ss.ps.instrument != NULL)
 	{
 		IncrementalSortGroupInfo *fullsortGroupInfo =
-			&incrsortstate->incsort_info.fullsortGroupInfo;
+		&incrsortstate->incsort_info.fullsortGroupInfo;
 		IncrementalSortGroupInfo *prefixsortGroupInfo =
-			&incrsortstate->incsort_info.prefixsortGroupInfo;
+		&incrsortstate->incsort_info.prefixsortGroupInfo;
+
 		fullsortGroupInfo->groupCount = 0;
 		fullsortGroupInfo->maxDiskSpaceUsed = 0;
 		fullsortGroupInfo->totalDiskSpaceUsed = 0;
@@ -988,17 +1004,17 @@ ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags)
 	ExecCreateScanSlotFromOuterPlan(estate, &incrsortstate->ss, &TTSOpsMinimalTuple);
 
 	/*
-	 * Initialize return slot and type. No need to initialize projection info because
-	 * this node doesn't do projections.
+	 * Initialize return slot and type. No need to initialize projection info
+	 * because this node doesn't do projections.
 	 */
 	ExecInitResultTupleSlotTL(&incrsortstate->ss.ps, &TTSOpsMinimalTuple);
 	incrsortstate->ss.ps.ps_ProjInfo = NULL;
 
 	/* make standalone slot to store previous tuple from outer node */
 	incrsortstate->group_pivot = MakeSingleTupleTableSlot(
-		ExecGetResultType(outerPlanState(incrsortstate)), &TTSOpsMinimalTuple);
+														  ExecGetResultType(outerPlanState(incrsortstate)), &TTSOpsMinimalTuple);
 	incrsortstate->transfer_tuple = MakeSingleTupleTableSlot(
-		ExecGetResultType(outerPlanState(incrsortstate)), &TTSOpsMinimalTuple);
+															 ExecGetResultType(outerPlanState(incrsortstate)), &TTSOpsMinimalTuple);
 
 	SO_printf("ExecInitIncrementalSort: sort node initialized\n");
 
diff --git a/src/backend/nodes/copyfuncs.c b/src/backend/nodes/copyfuncs.c
index d2b9bd95ba..12d5d4523d 100644
--- a/src/backend/nodes/copyfuncs.c
+++ b/src/backend/nodes/copyfuncs.c
@@ -968,7 +968,7 @@ _copySort(const Sort *from)
 static IncrementalSort *
 _copyIncrementalSort(const IncrementalSort *from)
 {
-	IncrementalSort	*newnode = makeNode(IncrementalSort);
+	IncrementalSort *newnode = makeNode(IncrementalSort);
 
 	/*
 	 * copy node superclass fields
@@ -3541,7 +3541,7 @@ _copyCreateStatsStmt(const CreateStatsStmt *from)
 }
 
 static AlterStatsStmt *
-_copyAlterStatsStmt(const AlterStatsStmt *from)
+_copyAlterStatsStmt(const AlterStatsStmt * from)
 {
 	AlterStatsStmt *newnode = makeNode(AlterStatsStmt);
 
@@ -3671,7 +3671,7 @@ _copyAlterOperatorStmt(const AlterOperatorStmt *from)
 }
 
 static AlterTypeStmt *
-_copyAlterTypeStmt(const AlterTypeStmt *from)
+_copyAlterTypeStmt(const AlterTypeStmt * from)
 {
 	AlterTypeStmt *newnode = makeNode(AlterTypeStmt);
 
diff --git a/src/backend/nodes/outfuncs.c b/src/backend/nodes/outfuncs.c
index 6c83372c9f..5620b24ac5 100644
--- a/src/backend/nodes/outfuncs.c
+++ b/src/backend/nodes/outfuncs.c
@@ -2694,7 +2694,7 @@ _outCreateStatsStmt(StringInfo str, const CreateStatsStmt *node)
 }
 
 static void
-_outAlterStatsStmt(StringInfo str, const AlterStatsStmt *node)
+_outAlterStatsStmt(StringInfo str, const AlterStatsStmt * node)
 {
 	WRITE_NODE_TYPE("ALTERSTATSSTMT");
 
@@ -3670,7 +3670,7 @@ outNode(StringInfo str, const void *obj)
 
 	if (obj == NULL)
 		appendStringInfoString(str, "<>");
-	else if (IsA(obj, List) ||IsA(obj, IntList) || IsA(obj, OidList))
+	else if (IsA(obj, List) || IsA(obj, IntList) || IsA(obj, OidList))
 		_outList(str, obj);
 	else if (IsA(obj, Integer) ||
 			 IsA(obj, Float) ||
diff --git a/src/backend/optimizer/path/allpaths.c b/src/backend/optimizer/path/allpaths.c
index 8d9c25e18f..e0bb71dd51 100644
--- a/src/backend/optimizer/path/allpaths.c
+++ b/src/backend/optimizer/path/allpaths.c
@@ -2804,9 +2804,9 @@ get_useful_pathkeys_for_relation(PlannerInfo *root, RelOptInfo *rel)
 		}
 
 		/*
-		 * This ends up allowing us to do incremental sort on top of
-		 * an index scan all parallelized under a gather merge node.
-	 */
+		 * This ends up allowing us to do incremental sort on top of an index
+		 * scan all parallelized under a gather merge node.
+		 */
 		if (query_pathkeys_ok)
 			useful_pathkeys_list = list_make1(list_copy(root->query_pathkeys));
 	}
@@ -2897,7 +2897,7 @@ generate_useful_gather_paths(PlannerInfo *root, RelOptInfo *rel, bool override_r
 		 */
 		if (cheapest_partial_path == subpath)
 		{
-			Path *tmp;
+			Path	   *tmp;
 
 			tmp = (Path *) create_sort_path(root,
 											rel,
@@ -2922,7 +2922,7 @@ generate_useful_gather_paths(PlannerInfo *root, RelOptInfo *rel, bool override_r
 		/* finally, consider incremental sort */
 		if (presorted_keys > 0)
 		{
-			Path *tmp;
+			Path	   *tmp;
 
 			/* Also consider incremental sort. */
 			tmp = (Path *) create_incremental_sort_path(root,
diff --git a/src/backend/optimizer/path/costsize.c b/src/backend/optimizer/path/costsize.c
index d1748d1011..152e016228 100644
--- a/src/backend/optimizer/path/costsize.c
+++ b/src/backend/optimizer/path/costsize.c
@@ -1683,9 +1683,9 @@ cost_recursive_union(Path *runion, Path *nrterm, Path *rterm)
  */
 static void
 cost_tuplesort(Cost *startup_cost, Cost *run_cost,
-		double tuples, int width,
-		Cost comparison_cost, int sort_mem,
-		double limit_tuples)
+			   double tuples, int width,
+			   Cost comparison_cost, int sort_mem,
+			   double limit_tuples)
 {
 	double		input_bytes = relation_byte_size(tuples, width);
 	double		output_bytes;
@@ -1810,10 +1810,10 @@ cost_full_sort(Cost *startup_cost, Cost *run_cost,
  */
 void
 cost_incremental_sort(Path *path,
-		PlannerInfo *root, List *pathkeys, int presorted_keys,
-		Cost input_startup_cost, Cost input_total_cost,
-		double input_tuples, int width, Cost comparison_cost, int sort_mem,
-		double limit_tuples)
+					  PlannerInfo *root, List *pathkeys, int presorted_keys,
+					  Cost input_startup_cost, Cost input_total_cost,
+					  double input_tuples, int width, Cost comparison_cost, int sort_mem,
+					  double limit_tuples)
 {
 	Cost		startup_cost = 0,
 				run_cost = 0,
@@ -1839,9 +1839,9 @@ cost_incremental_sort(Path *path,
 	/* Extract presorted keys as list of expressions */
 	foreach(l, pathkeys)
 	{
-		PathKey *key = (PathKey *)lfirst(l);
+		PathKey    *key = (PathKey *) lfirst(l);
 		EquivalenceMember *member = (EquivalenceMember *)
-		linitial(key->pk_eclass->ec_members);
+			linitial(key->pk_eclass->ec_members);
 
 		presortedExprs = lappend(presortedExprs, member->em_expr);
 
@@ -1856,11 +1856,11 @@ cost_incremental_sort(Path *path,
 	group_input_run_cost = input_run_cost / input_groups;
 
 	/*
-	 * Estimate average cost of sorting of one group where presorted keys
-	 * are equal. Incremental sort is sensitive to distribution of tuples
-	 * to the groups, where we're relying on quite rough assumptions. Thus,
-	 * we're pessimistic about incremental sort performance and increase
-	 * its average group size by half.
+	 * Estimate average cost of sorting of one group where presorted keys are
+	 * equal. Incremental sort is sensitive to distribution of tuples to the
+	 * groups, where we're relying on quite rough assumptions. Thus, we're
+	 * pessimistic about incremental sort performance and increase its average
+	 * group size by half.
 	 */
 	cost_tuplesort(&group_startup_cost, &group_run_cost,
 				   1.5 * group_tuples, width, comparison_cost, sort_mem,
@@ -1875,9 +1875,9 @@ cost_incremental_sort(Path *path,
 
 	/*
 	 * After we started producing tuples from the first group, the cost of
-	 * producing all the tuples is given by the cost to finish processing
-	 * this group, plus the total cost to process the remaining groups,
-	 * plus the remaining cost of input.
+	 * producing all the tuples is given by the cost to finish processing this
+	 * group, plus the total cost to process the remaining groups, plus the
+	 * remaining cost of input.
 	 */
 	run_cost += group_run_cost
 		+ (group_run_cost + group_startup_cost) * (input_groups - 1)
@@ -1916,8 +1916,8 @@ cost_sort(Path *path, PlannerInfo *root,
 		  double limit_tuples)
 
 {
-	Cost startup_cost;
-	Cost run_cost;
+	Cost		startup_cost;
+	Cost		run_cost;
 
 	cost_full_sort(&startup_cost, &run_cost,
 				   input_cost,
@@ -3173,7 +3173,7 @@ final_cost_mergejoin(PlannerInfo *root, MergePath *path,
 	 * The whole issue is moot if we are working from a unique-ified outer
 	 * input, or if we know we don't need to mark/restore at all.
 	 */
-	if (IsA(outer_path, UniquePath) ||path->skip_mark_restore)
+	if (IsA(outer_path, UniquePath) || path->skip_mark_restore)
 		rescannedtuples = 0;
 	else
 	{
diff --git a/src/backend/optimizer/path/pathkeys.c b/src/backend/optimizer/path/pathkeys.c
index 6e2ba08d7b..74799cd8fd 100644
--- a/src/backend/optimizer/path/pathkeys.c
+++ b/src/backend/optimizer/path/pathkeys.c
@@ -372,7 +372,7 @@ pathkeys_common_contained_in(List *keys1, List *keys2, int *n_common)
 int
 pathkeys_common(List *keys1, List *keys2)
 {
-	int n;
+	int			n;
 
 	(void) pathkeys_common_contained_in(keys1, keys2, &n);
 	return n;
@@ -1838,7 +1838,7 @@ right_merge_direction(PlannerInfo *root, PathKey *pathkey)
 static int
 pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys)
 {
-	int n_common_pathkeys;
+	int			n_common_pathkeys;
 
 	if (root->query_pathkeys == NIL)
 		return 0;				/* no special ordering requested */
@@ -1850,9 +1850,9 @@ pathkeys_useful_for_ordering(PlannerInfo *root, List *pathkeys)
 											 &n_common_pathkeys);
 
 	/*
-	 * Return the number of path keys in common, or 0 if there are none.
-	 * Any leading common pathkeys could be useful for ordering because
-	 * we can use the incremental sort.
+	 * Return the number of path keys in common, or 0 if there are none. Any
+	 * leading common pathkeys could be useful for ordering because we can use
+	 * the incremental sort.
 	 */
 	return n_common_pathkeys;
 }
diff --git a/src/backend/optimizer/plan/createplan.c b/src/backend/optimizer/plan/createplan.c
index 53d08aed2e..026a60b946 100644
--- a/src/backend/optimizer/plan/createplan.c
+++ b/src/backend/optimizer/plan/createplan.c
@@ -99,7 +99,7 @@ static Plan *create_projection_plan(PlannerInfo *root,
 static Plan *inject_projection_plan(Plan *subplan, List *tlist, bool parallel_safe);
 static Sort *create_sort_plan(PlannerInfo *root, SortPath *best_path, int flags);
 static IncrementalSort *create_incrementalsort_plan(PlannerInfo *root,
-		IncrementalSortPath *best_path, int flags);
+													IncrementalSortPath *best_path, int flags);
 static Group *create_group_plan(PlannerInfo *root, GroupPath *best_path);
 static Unique *create_upper_unique_plan(PlannerInfo *root, UpperUniquePath *best_path,
 										int flags);
@@ -247,9 +247,9 @@ static Sort *make_sort(Plan *lefttree, int numCols,
 					   AttrNumber *sortColIdx, Oid *sortOperators,
 					   Oid *collations, bool *nullsFirst);
 static IncrementalSort *make_incrementalsort(Plan *lefttree,
-		int numCols, int presortedCols,
-		AttrNumber *sortColIdx, Oid *sortOperators,
-		Oid *collations, bool *nullsFirst);
+											 int numCols, int presortedCols,
+											 AttrNumber *sortColIdx, Oid *sortOperators,
+											 Oid *collations, bool *nullsFirst);
 static Plan *prepare_sort_from_pathkeys(Plan *lefttree, List *pathkeys,
 										Relids relids,
 										const AttrNumber *reqColIdx,
@@ -265,7 +265,7 @@ static EquivalenceMember *find_ec_member_for_tle(EquivalenceClass *ec,
 static Sort *make_sort_from_pathkeys(Plan *lefttree, List *pathkeys,
 									 Relids relids);
 static IncrementalSort *make_incrementalsort_from_pathkeys(Plan *lefttree,
-		List *pathkeys, Relids relids, int presortedCols);
+														   List *pathkeys, Relids relids, int presortedCols);
 static Sort *make_sort_from_groupcols(List *groupcls,
 									  AttrNumber *grpColIdx,
 									  Plan *lefttree);
@@ -2016,17 +2016,17 @@ static IncrementalSort *
 create_incrementalsort_plan(PlannerInfo *root, IncrementalSortPath *best_path,
 							int flags)
 {
-	IncrementalSort	*plan;
-	Plan *subplan;
+	IncrementalSort *plan;
+	Plan	   *subplan;
 
 	/* See comments in create_sort_plan() above */
 	subplan = create_plan_recurse(root, best_path->spath.subpath,
								  flags | CP_SMALL_TLIST);
 	plan = make_incrementalsort_from_pathkeys(subplan,
-		best_path->spath.path.pathkeys,
-		IS_OTHER_REL(best_path->spath.subpath->parent) ?
-		best_path->spath.path.parent->relids : NULL,
-		best_path->presortedCols);
+											  best_path->spath.path.pathkeys,
+											  IS_OTHER_REL(best_path->spath.subpath->parent) ?
+											  best_path->spath.path.parent->relids : NULL,
+											  best_path->presortedCols);
 
 	copy_generic_path_info(&plan->sort.plan, (Path *) best_path);
 
@@ -5131,18 +5131,18 @@ label_sort_with_costsize(PlannerInfo *root, Sort *plan, double limit_tuples)
 				run_cost;
 
 	/*
-	 * This function shouldn't have to deal with IncrementalSort plans
-	 * because they are only created from corresponding Path nodes.
+	 * This function shouldn't have to deal with IncrementalSort plans because
+	 * they are only created from corresponding Path nodes.
 	 */
 	Assert(IsA(plan, Sort));
 
 	cost_full_sort(&startup_cost, &run_cost,
-		lefttree->total_cost,
-		lefttree->plan_rows,
-		lefttree->plan_width,
-		0.0,
-		work_mem,
-		limit_tuples);
+				   lefttree->total_cost,
+				   lefttree->plan_rows,
+				   lefttree->plan_width,
+				   0.0,
+				   work_mem,
+				   limit_tuples);
 	plan->plan.startup_cost = startup_cost;
 	plan->plan.total_cost = startup_cost + run_cost;
 	plan->plan.plan_rows = lefttree->plan_rows;
@@ -5750,11 +5750,11 @@ make_sort(Plan *lefttree, int numCols,
  */
 static IncrementalSort *
 make_incrementalsort(Plan *lefttree, int numCols, int presortedCols,
-		AttrNumber *sortColIdx, Oid *sortOperators,
-		Oid *collations, bool *nullsFirst)
+					 AttrNumber *sortColIdx, Oid *sortOperators,
+					 Oid *collations, bool *nullsFirst)
 {
-	IncrementalSort	*node;
-	Plan *plan;
+	IncrementalSort *node;
+	Plan	   *plan;
 
 	node = makeNode(IncrementalSort);
 
@@ -6130,7 +6130,7 @@ make_sort_from_pathkeys(Plan *lefttree, List *pathkeys, Relids relids)
 */
 static IncrementalSort *
 make_incrementalsort_from_pathkeys(Plan *lefttree, List *pathkeys,
-		Relids relids, int presortedCols)
+								   Relids relids, int presortedCols)
 {
 	int			numsortkeys;
 	AttrNumber *sortColIdx;
diff --git a/src/backend/optimizer/plan/planner.c b/src/backend/optimizer/plan/planner.c
index 15223017c0..330a6a2f6c 100644
--- a/src/backend/optimizer/plan/planner.c
+++ b/src/backend/optimizer/plan/planner.c
@@ -4868,7 +4868,7 @@ create_distinct_paths(PlannerInfo *root,
 	else
 	{
 		Size		hashentrysize = hash_agg_entry_size(
-			0, cheapest_input_path->pathtarget->width, 0);
+															0, cheapest_input_path->pathtarget->width, 0);
 
 		/* Allow hashing only if hashtable is predicted to fit in work_mem */
 		allow_hash = (hashentrysize * numDistinctRows <= work_mem * 1024L);
@@ -4986,8 +4986,8 @@ create_ordered_paths(PlannerInfo *root,
 		if (input_path == cheapest_input_path)
 		{
 			/*
-			 * Sort the cheapest input path. An explicit sort here can take
-			 * advantage of LIMIT.
+			 * Sort the cheapest input path. An explicit sort here can
+			 * take advantage of LIMIT.
 			 */
 			sorted_path = (Path *)
 				create_sort_path(root,
								 ordered_rel,
@@ -5079,9 +5079,9 @@ create_ordered_paths(PlannerInfo *root,
 	 */
 	if (enable_incrementalsort)
 	{
-		ListCell *lc;
+		ListCell   *lc;
 
-		foreach (lc, input_rel->partial_pathlist)
+		foreach(lc, input_rel->partial_pathlist)
 		{
 			Path	   *input_path = (Path *) lfirst(lc);
 			Path	   *sorted_path = input_path;
@@ -5091,9 +5091,9 @@ create_ordered_paths(PlannerInfo *root,
 
 			/*
 			 * We don't care if this is the cheapest partial path - we
-			 * can't simply skip it, because it may be partially sorted
-			 * in which case we want to consider incremental sort on top
-			 * of it (instead of full sort, which is what happens above).
+			 * can't simply skip it, because it may be partially sorted in
+			 * which case we want to consider incremental sort on top of
+			 * it (instead of full sort, which is what happens above).
 			 */
 			is_sorted = pathkeys_common_contained_in(root->sort_pathkeys,
@@ -6586,8 +6586,8 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 		else if (parse->hasAggs)
 		{
 			/*
-			 * We have aggregation, possibly with plain GROUP BY. Make
-			 * an AggPath.
+			 * We have aggregation, possibly with plain GROUP BY. Make an
+			 * AggPath.
 			 */
 			add_path(grouped_rel, (Path *)
 					 create_agg_path(root,
@@ -6604,8 +6604,8 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 		else if (parse->groupClause)
 		{
 			/*
-			 * We have GROUP BY without aggregation or grouping sets.
-			 * Make a GroupPath.
+			 * We have GROUP BY without aggregation or grouping sets. Make
+			 * a GroupPath.
 			 */
 			add_path(grouped_rel, (Path *)
					 create_group_path(root,
@@ -6676,8 +6676,8 @@ add_paths_to_grouping_rel(PlannerInfo *root, RelOptInfo *input_rel,
 
 				/*
 				 * Now we may consider incremental sort on this path, but only
-				 * when the path is not already sorted and when incremental sort
-				 * is enabled.
+				 * when the path is not already sorted and when incremental
+				 * sort is enabled.
 				 */
 				if (is_sorted || !enable_incrementalsort)
 					continue;
@@ -7273,7 +7273,7 @@ gather_grouping_paths(PlannerInfo *root, RelOptInfo *rel)
 		return;
 
 	/* also consider incremental sort on partial paths, if enabled */
-	foreach (lc, rel->partial_pathlist)
+	foreach(lc, rel->partial_pathlist)
 	{
 		Path	   *path = (Path *) lfirst(lc);
 		bool		is_sorted;
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index 2b676bf406..baefe0e946 100644
--- a/src/backend/optimizer/plan/setrefs.c
+++ b/src/backend/optimizer/plan/setrefs.c
@@ -1960,7 +1960,7 @@ set_upper_references(PlannerInfo *root, Plan *plan, int rtoffset)
 static void
 set_param_references(PlannerInfo *root, Plan *plan)
 {
-	Assert(IsA(plan, Gather) ||IsA(plan, GatherMerge));
+	Assert(IsA(plan, Gather) || IsA(plan, GatherMerge));
 
 	if (plan->lefttree->extParam)
 	{
diff --git a/src/backend/optimizer/util/pathnode.c b/src/backend/optimizer/util/pathnode.c
index 88402a9033..35773cc2c7 100644
--- a/src/backend/optimizer/util/pathnode.c
+++ b/src/backend/optimizer/util/pathnode.c
@@ -2627,7 +2627,7 @@ apply_projection_to_path(PlannerInfo *root,
 	 * workers can help project.  But if there is something that is not
 	 * parallel-safe in the target expressions, then we can't.
 	 */
-	if ((IsA(path, GatherPath) ||IsA(path, GatherMergePath)) &&
+	if ((IsA(path, GatherPath) || IsA(path, GatherMergePath)) &&
 		is_parallel_safe(root, (Node *) target->exprs))
 	{
 		/*
@@ -2755,14 +2755,14 @@ create_set_projection_path(PlannerInfo *root,
  */
 SortPath *
 create_incremental_sort_path(PlannerInfo *root,
-		RelOptInfo *rel,
-		Path *subpath,
-		List *pathkeys,
-		int presorted_keys,
-		double limit_tuples)
+							 RelOptInfo *rel,
+							 Path *subpath,
+							 List *pathkeys,
+							 int presorted_keys,
+							 double limit_tuples)
 {
 	IncrementalSortPath *sort = makeNode(IncrementalSortPath);
-	SortPath *pathnode = &sort->spath;
+	SortPath   *pathnode = &sort->spath;
 
 	pathnode->path.pathtype = T_IncrementalSort;
 	pathnode->path.parent = rel;
@@ -2779,13 +2779,13 @@ create_incremental_sort_path(PlannerInfo *root,
 	pathnode->subpath = subpath;
 
 	cost_incremental_sort(&pathnode->path,
-		root, pathkeys, presorted_keys,
-		subpath->startup_cost,
-		subpath->total_cost,
-		subpath->rows,
-		subpath->pathtarget->width,
-		0.0,	/* XXX comparison_cost shouldn't be 0? */
-		work_mem, limit_tuples);
+						  root, pathkeys, presorted_keys,
+						  subpath->startup_cost,
+						  subpath->total_cost,
+						  subpath->rows,
+						  subpath->pathtarget->width,
+						  0.0,	/* XXX comparison_cost shouldn't be 0? */
+						  work_mem, limit_tuples);
 
 	sort->presortedCols = presorted_keys;
 
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 4949ef2079..ca34552687 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -11630,7 +11630,7 @@ check_backtrace_functions(char **newval, void **extra, GucSource source)
 		else if ((*newval)[i] == ' ' ||
 				 (*newval)[i] == '\n' ||
 				 (*newval)[i] == '\t')
-			;	/* ignore these */
+			;					/* ignore these */
 		else
 			someval[j++] = (*newval)[i];	/* copy anything else */
 	}
diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index c2bd38f39f..2c2efff0a6 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -251,13 +251,13 @@ struct Tuplesortstate
 	int			maxTapes;		/* number of tapes (Knuth's T) */
 	int			tapeRange;		/* maxTapes-1 (Knuth's P) */
 	int64		maxSpace;		/* maximum amount of space occupied among sort
-								   of groups, either in-memory or on-disk */
-	bool		maxSpaceOnDisk;	/* true when maxSpace is value for on-disk
-								   space, false when it's value for in-memory
-								   space */
-	TupSortStatus maxSpaceStatus;	/* sort status when maxSpace was reached */
-	MemoryContext maincontext;	/* memory context for tuple sort metadata
-								   that persist across multiple batches */
+								 * of groups, either in-memory or on-disk */
+	bool		maxSpaceOnDisk; /* true when maxSpace is value for on-disk
+								 * space, false when it's value for in-memory
+								 * space */
+	TupSortStatus maxSpaceStatus;	/* sort status when maxSpace was reached */
+	MemoryContext maincontext;	/* memory context for tuple sort metadata that
+								 * persist across multiple batches */
 	MemoryContext sortcontext;	/* memory context holding most sort data */
 	MemoryContext tuplecontext; /* sub-context of sortcontext for tuple data */
 	LogicalTapeSet *tapeset;	/* logtape.c object for tapes in a temp file */
@@ -1350,8 +1350,8 @@ tuplesort_end(Tuplesortstate *state)
 static void
 tuplesort_updatemax(Tuplesortstate *state)
 {
-	int64 spaceUsed;
-	bool spaceUsedOnDisk;
+	int64		spaceUsed;
+	bool		spaceUsedOnDisk;
 
 	/*
 	 * Note: it might seem we should provide both memory and disk usage for a
@@ -1374,9 +1374,9 @@ tuplesort_updatemax(Tuplesortstate *state)
 	}
 
 	/*
-	 * Sort evicts data to the disk when it didn't manage to fit those data
-	 * to the main memory. This is why we assume space used on the disk to
-	 * be more important for tracking resource usage than space used in memory.
+	 * Sort evicts data to the disk when it didn't manage to fit those data to
+	 * the main memory. This is why we assume space used on the disk to be
+	 * more important for tracking resource usage than space used in memory.
 	 * Note that amount of space occupied by some tuple set on the disk might
 	 * be less than amount of space occupied by the same tuple set in the
 	 * memory due to more compact representation.
@@ -2775,7 +2775,7 @@ mergeruns(Tuplesortstate *state)
 	 */
 	state->memtupsize = numInputTapes;
 	state->memtuples = (SortTuple *) MemoryContextAlloc(state->maincontext,
-						numInputTapes * sizeof(SortTuple));
+														numInputTapes * sizeof(SortTuple));
 	USEMEM(state, GetMemoryChunkSpace(state->memtuples));
 
 	/*
diff --git a/src/include/executor/nodeIncrementalSort.h b/src/include/executor/nodeIncrementalSort.h
index 3113989272..e62c02a4f3 100644
--- a/src/include/executor/nodeIncrementalSort.h
+++ b/src/include/executor/nodeIncrementalSort.h
@@ -25,4 +25,4 @@ extern void ExecIncrementalSortInitializeDSM(IncrementalSortState *node, Paralle
 extern void ExecIncrementalSortInitializeWorker(IncrementalSortState *node, ParallelWorkerContext *pcxt);
 extern void ExecIncrementalSortRetrieveInstrumentation(IncrementalSortState *node);
 
-#endif	/* NODEINCREMENTALSORT_H */
+#endif							/* NODEINCREMENTALSORT_H */
diff --git a/src/include/nodes/execnodes.h b/src/include/nodes/execnodes.h
index 0934482123..c96f03e48d 100644
--- a/src/include/nodes/execnodes.h
+++ b/src/include/nodes/execnodes.h
@@ -1989,9 +1989,9 @@ typedef struct MaterialState
  */
 typedef struct PresortedKeyData
 {
-	FmgrInfo flinfo;	/* comparison function info */
-	FunctionCallInfo fcinfo;	/* comparison function call info */
-	OffsetNumber attno;	/* attribute number in tuple */
+	FmgrInfo	flinfo;			/* comparison function info */
+	FunctionCallInfo fcinfo;	/* comparison function call info */
+	OffsetNumber attno;			/* attribute number in tuple */
 } PresortedKeyData;
 
 /* ----------------
@@ -2024,12 +2024,12 @@ typedef struct SortState
 
 typedef struct IncrementalSortGroupInfo
 {
-	int64 groupCount;
-	long maxDiskSpaceUsed;
-	long totalDiskSpaceUsed;
-	long maxMemorySpaceUsed;
-	long totalMemorySpaceUsed;
-	List *sortMethods;
+	int64		groupCount;
+	long		maxDiskSpaceUsed;
+	long		totalDiskSpaceUsed;
+	long		maxMemorySpaceUsed;
+	long		totalMemorySpaceUsed;
+	List	   *sortMethods;
 } IncrementalSortGroupInfo;
 
 typedef struct IncrementalSortInfo
@@ -2044,8 +2044,8 @@ typedef struct IncrementalSortInfo
 */
 typedef struct SharedIncrementalSortInfo
 {
-	int num_workers;
-	IncrementalSortInfo sinfo[FLEXIBLE_ARRAY_MEMBER];
+	int			num_workers;
+	IncrementalSortInfo sinfo[FLEXIBLE_ARRAY_MEMBER];
 } SharedIncrementalSortInfo;
 
 /* ----------------
@@ -2066,13 +2066,13 @@ typedef struct IncrementalSortState
 	bool		bounded;		/* is the result set bounded? */
 	int64		bound;			/* if bounded, how many tuples are needed */
 	bool		sort_Done;		/* sort completed yet? */
-	bool		finished;		/* fetching tuples from outer node
-								   is finished ? */
+	bool		finished;		/* fetching tuples from outer node is finished
+								 * ? */
 	int64		bound_Done;		/* value of bound we did the sort with */
 	IncrementalSortExecutionStatus execution_status;
-	int64 n_fullsort_remaining;
-	Tuplesortstate *fullsort_state; /* private state of tuplesort.c */
-	Tuplesortstate *prefixsort_state; /* private state of tuplesort.c */
+	int64		n_fullsort_remaining;
+	Tuplesortstate *fullsort_state; /* private state of tuplesort.c */
+	Tuplesortstate *prefixsort_state;	/* private state of tuplesort.c */
 	/* the keys by which the input path is already sorted */
 	PresortedKeyData *presorted_keys;
@@ -2082,7 +2082,7 @@ typedef struct IncrementalSortState
 	TupleTableSlot *group_pivot;
 	TupleTableSlot *transfer_tuple;
 	bool		am_worker;		/* are we a worker? */
-	SharedIncrementalSortInfo *shared_info;	/* one entry per worker */
+	SharedIncrementalSortInfo *shared_info; /* one entry per worker */
 } IncrementalSortState;
 
 /* ---------------------
diff --git a/src/include/optimizer/cost.h b/src/include/optimizer/cost.h
index bfee4db721..34f18bd73a 100644
--- a/src/include/optimizer/cost.h
+++ b/src/include/optimizer/cost.h
@@ -103,14 +103,14 @@ extern void cost_sort(Path *path, PlannerInfo *root,
 					  Cost comparison_cost, int sort_mem,
 					  double limit_tuples);
 extern void cost_full_sort(Cost *startup_cost, Cost *run_cost,
-		Cost input_total_cost, double tuples, int width,
-		Cost comparison_cost, int sort_mem,
-		double limit_tuples);
+						   Cost input_total_cost, double tuples, int width,
+						   Cost comparison_cost, int sort_mem,
+						   double limit_tuples);
 extern void cost_incremental_sort(Path *path,
-		PlannerInfo *root, List *pathkeys, int presorted_keys,
-		Cost input_startup_cost, Cost input_total_cost,
-		double input_tuples, int width, Cost comparison_cost, int sort_mem,
-		double limit_tuples);
+								  PlannerInfo *root, List *pathkeys, int presorted_keys,
+								  Cost input_startup_cost, Cost input_total_cost,
+								  double input_tuples, int width, Cost comparison_cost, int sort_mem,
+								  double limit_tuples);
 extern void cost_append(AppendPath *path);
 extern void cost_merge_append(Path *path, PlannerInfo *root,
 							  List *pathkeys, int n_streams,
diff --git a/src/include/optimizer/pathnode.h b/src/include/optimizer/pathnode.h
index 57ecbbb01c..bcd08af753 100644
--- a/src/include/optimizer/pathnode.h
+++ b/src/include/optimizer/pathnode.h
@@ -185,11 +185,11 @@ extern ProjectSetPath *create_set_projection_path(PlannerInfo *root,
 												  Path *subpath,
 												  PathTarget *target);
 extern SortPath *create_incremental_sort_path(PlannerInfo *root,
-		RelOptInfo *rel,
-		Path *subpath,
-		List *pathkeys,
-		int presorted_keys,
-		double limit_tuples);
+											  RelOptInfo *rel,
+											  Path *subpath,
+											  List *pathkeys,
+											  int presorted_keys,
+											  double limit_tuples);
 extern SortPath *create_sort_path(PlannerInfo *root,
 								  RelOptInfo *rel,
 								  Path *subpath,
diff --git a/src/include/optimizer/paths.h b/src/include/optimizer/paths.h
index d778b884a9..f6994779de 100644
--- a/src/include/optimizer/paths.h
+++ b/src/include/optimizer/paths.h
@@ -191,7 +191,7 @@ typedef enum
 extern PathKeysComparison compare_pathkeys(List *keys1, List *keys2);
 extern bool pathkeys_contained_in(List *keys1, List *keys2);
 extern bool pathkeys_common_contained_in(List *keys1, List *keys2, int *n_common);
-extern int pathkeys_common(List *keys1, List *keys2);
+extern int	pathkeys_common(List *keys1, List *keys2);
 extern Path *get_cheapest_path_for_pathkeys(List *paths, List *pathkeys,
 											Relids required_outer,
 											CostSelector cost_criterion,
-- 
2.20.1
>From b33b70810d94950f4c3e20fb0fb01e103a25cf11 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 18:25:46 -0300
Subject: [PATCH 5/8] no \n after left parens, see c9d297751959

---
 src/backend/executor/nodeIncrementalSort.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c
index 4f6b438e7b..bbb3f35640 100644
--- a/src/backend/executor/nodeIncrementalSort.c
+++ b/src/backend/executor/nodeIncrementalSort.c
@@ -135,8 +135,8 @@ preparePresortedCols(IncrementalSortState *node)
 		key = &node->presorted_keys[i];
 		key->attno = plannode->sort.sortColIdx[i];

-		equalityOp = get_equality_op_for_ordering_op(
-													 plannode->sort.sortOperators[i], NULL);
+		equalityOp = get_equality_op_for_ordering_op(plannode->sort.sortOperators[i],
+													 NULL);
 		if (!OidIsValid(equalityOp))
 			elog(ERROR, "missing equality operator for ordering operator %u",
 				 plannode->sort.sortOperators[i]);
@@ -265,8 +265,7 @@ switchToPresortedPrefixMode(PlanState *pstate)
 		 * Optimize the sort by assuming the prefix columns are all equal and
 		 * thus we only need to sort by any remaining columns.
 		 */
-		prefixsort_state = tuplesort_begin_heap(
-												tupDesc,
+		prefixsort_state = tuplesort_begin_heap(tupDesc,
												plannode->sort.numCols - presortedCols,
												&(plannode->sort.sortColIdx[presortedCols]),
												&(plannode->sort.sortOperators[presortedCols]),
@@ -576,8 +575,7 @@ ExecIncrementalSort(PlanState *pstate)
				 * setup the full sort tuplesort to sort by all requested sort
				 * columns.
				 */
-				fullsort_state = tuplesort_begin_heap(
-													  tupDesc,
+				fullsort_state = tuplesort_begin_heap(tupDesc,
													  plannode->sort.numCols,
													  plannode->sort.sortColIdx,
													  plannode->sort.sortOperators,
@@ -1011,10 +1009,12 @@ ExecInitIncrementalSort(IncrementalSort *node, EState *estate, int eflags)
	incrsortstate->ss.ps.ps_ProjInfo = NULL;

	/* make standalone slot to store previous tuple from outer node */
-	incrsortstate->group_pivot = MakeSingleTupleTableSlot(
-		ExecGetResultType(outerPlanState(incrsortstate)), &TTSOpsMinimalTuple);
-	incrsortstate->transfer_tuple = MakeSingleTupleTableSlot(
-		ExecGetResultType(outerPlanState(incrsortstate)), &TTSOpsMinimalTuple);
+	incrsortstate->group_pivot =
+		MakeSingleTupleTableSlot(ExecGetResultType(outerPlanState(incrsortstate)),
+								 &TTSOpsMinimalTuple);
+	incrsortstate->transfer_tuple =
+		MakeSingleTupleTableSlot(ExecGetResultType(outerPlanState(incrsortstate)),
+								 &TTSOpsMinimalTuple);

	SO_printf("ExecInitIncrementalSort: sort node initialized\n");

-- 
2.20.1
>From be20eb23c85a5e53d7406a21a1e0ca5c5006c2ea Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 18:29:52 -0300
Subject: [PATCH 6/8] use castNode() instead of Assert(IsA) plus cast

---
 src/backend/executor/nodeIncrementalSort.c | 26 ++++++++--------------
 1 file changed, 9 insertions(+), 17 deletions(-)

diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c
index bbb3f35640..be1afbb169 100644
--- a/src/backend/executor/nodeIncrementalSort.c
+++ b/src/backend/executor/nodeIncrementalSort.c
@@ -115,18 +115,14 @@ instrumentSortedGroup(PlanState *pstate, IncrementalSortGroupInfo *groupInfo,
 static void
 preparePresortedCols(IncrementalSortState *node)
 {
-	IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan;
-	int			presortedCols,
-				i;
+	IncrementalSort *plannode = castNode(IncrementalSort, node->ss.ps.plan);

-	Assert(IsA(plannode, IncrementalSort));
-	presortedCols = plannode->presortedCols;
-
-	node->presorted_keys = (PresortedKeyData *) palloc(presortedCols *
-													   sizeof(PresortedKeyData));
+	node->presorted_keys =
+		(PresortedKeyData *) palloc(plannode->presortedCols *
+									sizeof(PresortedKeyData));

 	/* Pre-cache comparison functions for each pre-sorted key. */
-	for (i = 0; i < presortedCols; i++)
+	for (int i = 0; i < plannode->presortedCols; i++)
 	{
 		Oid			equalityOp,
					equalityFunc;
@@ -162,17 +158,13 @@ preparePresortedCols(IncrementalSortState *node)
 *
 * We do this by comparing its first 'presortedCols' column values to
 * the pivot tuple of the current group.
- *
 */
static bool
isCurrentGroup(IncrementalSortState *node, TupleTableSlot *pivot, TupleTableSlot *tuple)
{
-	int			presortedCols,
-				i;
+	int			presortedCols;

-	Assert(IsA(node->ss.ps.plan, IncrementalSort));
-
-	presortedCols = ((IncrementalSort *) node->ss.ps.plan)->presortedCols;
+	presortedCols = castNode(IncrementalSort, node->ss.ps.plan)->presortedCols;

	/*
	 * That the input is sorted by keys
	 * (0, ... n) implies that the tail
@@ -180,7 +172,7 @@ isCurrentGroup(IncrementalSortState *node, TupleTableSlot *pivot, TupleTableSlot
	 * from the last pre-sorted column to optimize for early detection of
	 * inequality and minimizing the number of function calls..
	 */
-	for (i = presortedCols - 1; i >= 0; i--)
+	for (int i = presortedCols - 1; i >= 0; i--)
	{
		Datum		datumA,
					datumB,
@@ -250,7 +242,7 @@ switchToPresortedPrefixMode(PlanState *pstate)
	bool		firstTuple = true;
	TupleDesc	tupDesc;
	PlanState  *outerNode;
-	IncrementalSort *plannode = (IncrementalSort *) node->ss.ps.plan;
+	IncrementalSort *plannode = castNode(IncrementalSort, node->ss.ps.plan);

	dir = node->ss.ps.state->es_direction;
	outerNode = outerPlanState(node);
-- 
2.20.1
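The castNode() change above folds the Assert-then-cast idiom into a single expression that checks the node tag before casting. A standalone sketch of the pattern — the NodeTag enum, IsA, and castNode here are simplified mocks for illustration, not the real definitions from nodes/nodes.h:

```c
#include <assert.h>

/* toy stand-ins for PostgreSQL's Node machinery */
typedef enum { T_IncrementalSort, T_SeqScan } NodeTag;
typedef struct Node { NodeTag type; } Node;

typedef struct IncrementalSort
{
	Node		node;			/* tag must come first, as in real Plan nodes */
	int			presortedCols;
} IncrementalSort;

#define IsA(ptr, t)  (((const Node *) (ptr))->type == T_##t)
/* check the tag and cast in one expression (mock of the real castNode) */
#define castNode(t, ptr)  (assert(IsA(ptr, t)), (t *) (ptr))

static int
demo_presorted_cols(Node *plan)
{
	/* old style:
	 *     Assert(IsA(plan, IncrementalSort));
	 *     IncrementalSort *s = (IncrementalSort *) plan;
	 * new style: */
	IncrementalSort *s = castNode(IncrementalSort, plan);

	return s->presortedCols;
}
```

Besides being shorter, the combined form can't drift apart: the checked type and the cast type are spelled once, so they stay in sync.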
>From 0404936af5b9f2cc420863f505dae6f9085440ad Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 18:32:53 -0300
Subject: [PATCH 7/8] Test trivial condition before more complex one

---
 src/backend/executor/nodeIncrementalSort.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c
index be1afbb169..909d2df53f 100644
--- a/src/backend/executor/nodeIncrementalSort.c
+++ b/src/backend/executor/nodeIncrementalSort.c
@@ -499,9 +499,9 @@ ExecIncrementalSort(PlanState *pstate)
		read_sortstate = node->execution_status == INCSORT_READFULLSORT ?
			fullsort_state : node->prefixsort_state;
		slot = node->ss.ps.ps_ResultTupleSlot;

-		if (tuplesort_gettupleslot(read_sortstate, ScanDirectionIsForward(dir),
-								   false, slot, NULL) || node->finished)
-
+		if (node->finished ||
+			tuplesort_gettupleslot(read_sortstate, ScanDirectionIsForward(dir),
+								   false, slot, NULL))
			/*
			 * TODO: there isn't a good test case for the node->finished case
			 * directly, but lots of other stuff fails if it's not there. If
-- 
2.20.1
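The reordering in 0007 is more than style: with C's short-circuiting `||`, testing the trivial `node->finished` flag first means the expensive call (which also stores into the slot) is skipped entirely once the node is done. A minimal sketch, with a hypothetical `get_tuple()` standing in for `tuplesort_gettupleslot()`:

```c
#include <stdbool.h>

static int	expensive_calls = 0;	/* counts invocations of the costly path */

/* stand-in for tuplesort_gettupleslot(): costly, and writes into *slot */
static bool
get_tuple(int *slot)
{
	expensive_calls++;
	*slot = 42;
	return true;
}

/* returns how many expensive calls one read step performed */
static int
read_step(bool finished)
{
	int			slot = 0;

	/* trivial flag first: || short-circuits, so get_tuple() only runs
	 * when we are not already finished */
	if (finished || get_tuple(&slot))
	{
		/* done: either no more input, or we fetched a tuple into slot */
	}
	return expensive_calls;
}
```

With the original order, `get_tuple()` would run (and clobber the slot) even after the node had finished; the swapped order avoids both the cost and the side effect.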
>From 04c45e4047ae10cc5706fadd9075d2e7c1ff1411 Mon Sep 17 00:00:00 2001
From: Alvaro Herrera <alvhe...@alvh.no-ip.org>
Date: Thu, 12 Mar 2020 18:33:02 -0300
Subject: [PATCH 8/8] reverse arguments .. isn't the other order a bug?

---
 src/backend/executor/nodeIncrementalSort.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/src/backend/executor/nodeIncrementalSort.c b/src/backend/executor/nodeIncrementalSort.c
index 909d2df53f..bb88fca207 100644
--- a/src/backend/executor/nodeIncrementalSort.c
+++ b/src/backend/executor/nodeIncrementalSort.c
@@ -653,7 +653,7 @@ ExecIncrementalSort(PlanState *pstate)
										   &node->incsort_info.fullsortGroupInfo,
										   fullsort_state);

-				SO_printf("Setting execution_status to INCSORT_READFULLSORT (final tuple) \n");
+				SO_printf("Setting execution_status to INCSORT_READFULLSORT (final tuple)\n");
				node->execution_status = INCSORT_READFULLSORT;
				break;
			}
@@ -713,7 +713,8 @@ ExecIncrementalSort(PlanState *pstate)
				 * configuring sorting bound.
				 */
				SO2_printf("Changing bound_Done from %ld to %ld\n",
-						   Min(node->bound, node->bound_Done + nTuples), node->bound_Done);
+						   node->bound_Done,
+						   Min(node->bound, node->bound_Done + nTuples));
				node->bound_Done = Min(node->bound, node->bound_Done + nTuples);
			}

@@ -721,7 +722,8 @@ ExecIncrementalSort(PlanState *pstate)
			 * Once we find changed prefix keys we can complete the
			 * sort and begin reading out the sorted tuples.
			 */
-			SO1_printf("Sorting fullsort tuplesort with %ld tuples\n", nTuples);
+			SO1_printf("Sorting fullsort tuplesort with %ld tuples\n",
+					   nTuples);
			tuplesort_performsort(fullsort_state);

			if (pstate->instrument != NULL)
@@ -882,7 +884,8 @@ ExecIncrementalSort(PlanState *pstate)
				 * in configuring sorting bound.
				 */
				SO2_printf("Changing bound_Done from %ld to %ld\n",
-						   Min(node->bound, node->bound_Done + nTuples), node->bound_Done);
+						   node->bound_Done,
+						   Min(node->bound, node->bound_Done + nTuples));
				node->bound_Done = Min(node->bound, node->bound_Done + nTuples);
			}
		}
-- 
2.20.1
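The point of 0008 is that the varargs must follow the wording of the format string: "Changing bound_Done from %ld to %ld" wants the current value first and the new value second, which is the order the patch restores. A tiny self-contained check of the class of bug — `log_bound_change` is a made-up helper for illustration, not PostgreSQL code:

```c
#include <stdio.h>
#include <string.h>

/* format the debug message the way the comment promises:
 * "from <old value> to <new value>" */
static void
log_bound_change(char *buf, size_t len, long old_bound, long new_bound)
{
	/* swapping these two arguments compiles fine (both are %ld), so the
	 * only symptom of the bug is a misleading trace message */
	snprintf(buf, len, "Changing bound_Done from %ld to %ld",
			 old_bound, new_bound);
}
```

Since both arguments have the same type, neither the compiler nor -Wformat can catch the swap; only reading the emitted trace against the code reveals it, which is presumably how the reversed order went unnoticed.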