Peter Geoghegan <p...@bowt.ie> writes:
> On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas <hlinn...@iki.fi> wrote:
>> Independently of this, perhaps we should put in a special case in
>> dumptuples(), so that it would never create a run with fewer than
>> maxTapes tuples. The rationale is that you'll need to hold that many
>> tuples in memory during the merge phase anyway, so it seems silly to
>> bail out before that while building the initial runs. You're going to
>> exceed work_mem by roughly the same amount anyway, just in a different
>> phase. That's not the case in this example, but it might happen when
>> sorting extremely wide tuples.
> -1 from me. What about the case where only some tuples are massive?

Well, what about it?  If there are just a few wide tuples, then the peak
memory consumption will depend on how many of those happen to be in
memory at the same time ... but we have zero control over that in the
merge phase, so why sweat about it here?

I think Heikki's got a good idea about setting a lower bound on the
number of tuples we'll hold in memory during run creation.

			regards, tom lane
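
For concreteness, here's a minimal sketch of the kind of guard Heikki is
describing, written against tuplesort.c-style state (memtupcount,
memtupsize, maxTapes, and the LACKMEM() macro are assumed from that
file). This is an illustration of the idea, not a proposed patch:

    /*
     * Sketch: refuse to dump a run smaller than maxTapes tuples.
     * The merge phase will hold at least one tuple per input tape in
     * memory anyway, so exceeding work_mem here by roughly the same
     * amount just moves the overshoot to a different phase.
     */
    static void
    dumptuples(Tuplesortstate *state, bool alltuples)
    {
        /* Nothing to do while we still fit in memory with free slots. */
        if (!alltuples &&
            state->memtupcount < state->memtupsize && !LACKMEM(state))
            return;

        /*
         * Proposed lower bound: keep accumulating tuples until we have
         * at least maxTapes of them, even though that can mean exceeding
         * work_mem when individual tuples are very wide.
         */
        if (!alltuples && state->memtupcount < state->maxTapes)
            return;

        /* ... existing logic: sort memtuples, write them out as a run ... */
    }

The second early-return test is the only new part; the first mirrors the
existing "still fits in memory" fast path.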