On Fri, 26 Aug 2022 at 16:48, David Rowley <dgrowle...@gmail.com> wrote:
> 0003: Changes writetuple to tell it what it should do in regards to
> freeing and adjusting the memory accounting.
>
> Probably 0003 could be done differently. I'm certainly not set on the
> bool args. I understand that I'm never calling it with "freetup" ==
> true. So other options include 1) rip out the pfree code and that
> parameter; or 2) just do the inlining manually at both call sites.

This patch series needed to be rebased, and on looking at it again,
since the pfree() code is never used I felt it makes very little sense
to keep it. So I decided it would be better to keep the WRITETUP macro,
completely get rid of the writetuple() function, and have the macro
call the function pointed to by the "writetup" pointer.  The only extra
code we needed from writetuple() was the memory accounting, which is
only used in dumptuples(), so I've just included that code directly in
that function instead.
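
For reference, the new macro in the attached patch is simply:

#define WRITETUP(state,tape,stup)	((*(state)->base.writetup) (state, tape, stup))

so WRITETUP now goes straight through the "writetup" function pointer,
the same way the existing READTUP and COMPARETUP macros go through
theirs.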

I also noticed that dumptuples() had a pretty braindead method of
zeroing out state->memtupcount: it subtracted 1 from it on each loop
iteration.  Since that counter is not being used to track the loop's
progress, I've moved the update out of the loop and changed the code to
set it to 0 once the loop is done.
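
With the attached patch, the loop in dumptuples() ends up looking
roughly like this (the actual pfree is left to the tuplecontext reset
that happens right after the loop):

    for (i = 0; i < memtupwrite; i++)
    {
        SortTuple  *stup = &state->memtuples[i];

        WRITETUP(state, state->destTape, stup);

        /* account for the tuple's memory; the context reset will free it */
        if (stup->tuple != NULL)
            FREEMEM(state, GetMemoryChunkSpace(stup->tuple));
    }

    state->memtupcount = 0;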

> I'll throw this in the September CF to see if anyone wants to look.
> There's probably lots more cleaning jobs that could be done in
> tuplesort.c.

My current thoughts are that this is a fairly trivial patch, and unless
there are any objections, I plan to push it soon.

David
From 7d9d960c6080f9511ecb2514defed386b9b65cdb Mon Sep 17 00:00:00 2001
From: David Rowley <dgrow...@gmail.com>
Date: Wed, 31 Aug 2022 18:52:11 +1200
Subject: [PATCH v2] Be smarter about freeing tuples during tuplesorts

During dumptuples() the call to writetuple() would pfree any non-null
tuple.  This was quite wasteful as it happens just before we perform a
reset of the context which stores all of those tuples.

It seems to make sense to do a bit of a code refactor to make this work,
so here we just get rid of the writetuple function and adjust the WRITETUP
macro to call the state's writetup function.  The WRITETUP usage in
mergeonerun() always has state->slabAllocatorUsed == true, so writetuple()
would never free the tuple or do any memory accounting.  The only call
path that needs memory accounting done is in dumptuples(), so let's just
do it manually there.

In passing, let's get rid of the state->memtupcount-- code that counts the
memtupcount down to 0 one tuple at a time inside the loop.  That seems to
be a rather inefficient way to set memtupcount to 0, so let's just zero it
after the loop instead.
---
 src/backend/utils/sort/tuplesort.c | 38 ++++++++++++------------------
 1 file changed, 15 insertions(+), 23 deletions(-)

diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index 66950983e6..416f02ba3c 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -395,7 +395,7 @@ struct Sharedsort
 
 #define REMOVEABBREV(state,stup,count) ((*(state)->base.removeabbrev) (state, stup, count))
 #define COMPARETUP(state,a,b)  ((*(state)->base.comparetup) (a, b, state))
-#define WRITETUP(state,tape,stup)      (writetuple(state, tape, stup))
+#define WRITETUP(state,tape,stup)      ((*(state)->base.writetup) (state, tape, stup))
 #define READTUP(state,stup,tape,len) ((*(state)->base.readtup) (state, stup, tape, len))
 #define FREESTATE(state)       ((state)->base.freestate ? (*(state)->base.freestate) (state) : (void) 0)
 #define LACKMEM(state)         ((state)->availMem < 0 && !(state)->slabAllocatorUsed)
@@ -453,8 +453,6 @@ struct Sharedsort
 
 
 static void tuplesort_begin_batch(Tuplesortstate *state);
-static void writetuple(Tuplesortstate *state, LogicalTape *tape,
-                                          SortTuple *stup);
 static bool consider_abort_common(Tuplesortstate *state);
 static void inittapes(Tuplesortstate *state, bool mergeruns);
 static void inittapestate(Tuplesortstate *state, int maxTapes);
@@ -1339,24 +1337,6 @@ tuplesort_puttuple_common(Tuplesortstate *state, SortTuple *tuple, bool useAbbre
        MemoryContextSwitchTo(oldcontext);
 }
 
-/*
- * Write a stored tuple onto tape.  Unless the slab allocator is
- * used, after writing the tuple, pfree() the out-of-line data (not the
- * SortTuple struct!), and increase state->availMem by the amount of
- * memory space thereby released.
- */
-static void
-writetuple(Tuplesortstate *state, LogicalTape *tape, SortTuple *stup)
-{
-       state->base.writetup(state, tape, stup);
-
-       if (!state->slabAllocatorUsed && stup->tuple)
-       {
-               FREEMEM(state, GetMemoryChunkSpace(stup->tuple));
-               pfree(stup->tuple);
-       }
-}
-
 static bool
 consider_abort_common(Tuplesortstate *state)
 {
@@ -2260,6 +2240,8 @@ mergeonerun(Tuplesortstate *state)
         */
        beginmerge(state);
 
+       Assert(state->slabAllocatorUsed);
+
        /*
         * Execute merge by repeatedly extracting lowest tuple in heap, writing it
         * out, and replacing it with next tuple from same tape (if there is
@@ -2418,10 +2400,20 @@ dumptuples(Tuplesortstate *state, bool alltuples)
        memtupwrite = state->memtupcount;
        for (i = 0; i < memtupwrite; i++)
        {
-               WRITETUP(state, state->destTape, &state->memtuples[i]);
-               state->memtupcount--;
+               SortTuple  *stup = &state->memtuples[i];
+
+               WRITETUP(state, state->destTape, stup);
+
+               /*
+                * Account for freeing the tuple, but no need to do the actual pfree
+                * since the tuplecontext is being reset after the loop.
+                */
+               if (stup->tuple != NULL)
+                       FREEMEM(state, GetMemoryChunkSpace(stup->tuple));
        }
 
+       state->memtupcount = 0;
+
        /*
         * Reset tuple memory.  We've freed all of the tuples that we previously
         * allocated.  It's important to avoid fragmentation when there is a stark
-- 
2.34.1
