Re: Sort performance cliff with small work_mem

2018-05-02 Thread Peter Geoghegan
On Wed, May 2, 2018 at 11:06 AM, Tom Lane wrote: >> -1 from me. What about the case where only some tuples are massive? > > Well, what about it? If there are just a few wide tuples, then the peak > memory consumption will depend on how many of those happen to be in memory > at the same time ... b

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Tom Lane
Peter Geoghegan writes: > On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas wrote: >> Independently of this, perhaps we should put in a special case in >> dumptuples(), so that it would never create a run with fewer than maxTapes >> tuples. The rationale is that you'll need to hold that many tupl

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Peter Geoghegan
On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas wrote: > I'm not sure what you could derive that from, to make it less arbitrary. At > the moment, I'm thinking of just doing something like this: > > /* > * Minimum amount of memory reserved to hold the sorted tuples in > * TSS_BUILDRUNS phase

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Peter Geoghegan
On Wed, May 2, 2018 at 10:43 AM, Heikki Linnakangas wrote: > Independently of this, perhaps we should put in a special case in > dumptuples(), so that it would never create a run with fewer than maxTapes > tuples. The rationale is that you'll need to hold that many tuples in memory > during the merg
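The special case floated here can be sketched as a guard (a hypothetical helper of my own; the real dumptuples() in tuplesort.c is more involved): never start writing a run with fewer than maxTapes tuples, since the merge phase must hold roughly one tuple per input tape in memory anyway.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical guard illustrating the idea: defer dumping until at
 * least maxTapes tuples are buffered, so no run is ever smaller than
 * what the merge will need to keep in memory at once.
 */
bool
should_dump_tuples(int memtuple_count, int max_tapes, bool lack_mem)
{
    /* Never create a run with fewer than maxTapes tuples ... */
    if (memtuple_count < max_tapes)
        return false;
    /* ... otherwise dump once memory is exhausted. */
    return lack_mem;
}
```

With maxTapes = 6, a buffer of five tuples is never dumped even under memory pressure, while six or more tuples dump as soon as memory runs out.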

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Heikki Linnakangas
On 02/05/18 19:41, Tom Lane wrote: Robert Haas writes: On Wed, May 2, 2018 at 11:38 AM, Heikki Linnakangas wrote: To fix, I propose that we change the above so that we always subtract tapeSpace, but if there is less than e.g. 32 kB of memory left after that (including, if it went below 0), th
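Heikki's proposal quoted above can be sketched as a toy model (function and variable names are mine, not PostgreSQL's; the 32 kB floor is the example value from the thread): always charge tapeSpace, then clamp the remainder so a minimal tuple budget survives, even if that slightly overshoots work_mem.

```c
#include <assert.h>

/*
 * Sketch of the proposed fix (simplified, hypothetical names):
 * always subtract tapeSpace, and if that leaves less than 32 kB
 * (possibly a negative amount), bump the budget back up to 32 kB.
 */
#define MIN_TUPLE_BUDGET (32L * 1024)

long
tuple_budget_fixed(long avail_mem, long tape_space)
{
    avail_mem -= tape_space;            /* always charge the tapes */
    if (avail_mem < MIN_TUPLE_BUDGET)
        avail_mem = MIN_TUPLE_BUDGET;   /* may overshoot work_mem */
    return avail_mem;
}
```

The point of the clamp is that the tuple budget becomes monotonic in the available memory: values just below and just above the tape-space threshold both land on the 32 kB floor instead of straddling a 48 kB step.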

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Peter Geoghegan
On Wed, May 2, 2018 at 8:38 AM, Heikki Linnakangas wrote: > With a small work_mem values, maxTapes is always 6, so tapeSpace is 48 kB. > With a small enough work_mem, 84 kB or below in this test case, there is not > enough memory left at this point, so we don't subtract tapeSpace. However, > with
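The cliff mechanism described in this exchange can be illustrated with a toy model (the function name and the simplified condition are mine, not the actual tuplesort.c code): charging the tape buffers against the remaining memory only when they fit makes the memory left for tuples discontinuous as work_mem grows.

```c
#include <assert.h>

/*
 * Toy model (hypothetical, simplified) of the accounting described
 * above: tape buffer space is charged against the remaining memory
 * only if it fits.  Crossing the point where the charge first
 * succeeds abruptly removes tapeSpace from the tuple budget.
 */
long
tuple_budget_current(long avail_mem, long tape_space)
{
    if (tape_space < avail_mem)
        avail_mem -= tape_space;    /* charge succeeds */
    /* else: skip the charge entirely, as the current code does */
    return avail_mem;
}
```

With tapeSpace = 48 kB (six tapes), two bytes of extra available memory flip the result from 49151 bytes for tuples down to a single byte: the cliff the thread is about.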

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Tom Lane
Robert Haas writes: > On Wed, May 2, 2018 at 11:38 AM, Heikki Linnakangas wrote: >> To fix, I propose that we change the above so that we always subtract >> tapeSpace, but if there is less than e.g. 32 kB of memory left after that >> (including, if it went below 0), then we bump availMem back up

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Peter Geoghegan
On Wed, May 2, 2018 at 8:46 AM, Robert Haas wrote: > On Wed, May 2, 2018 at 11:38 AM, Heikki Linnakangas wrote: >> To fix, I propose that we change the above so that we always subtract >> tapeSpace, but if there is less than e.g. 32 kB of memory left after that >> (including, if it went below 0),

Re: Sort performance cliff with small work_mem

2018-05-02 Thread Robert Haas
On Wed, May 2, 2018 at 11:38 AM, Heikki Linnakangas wrote: > To fix, I propose that we change the above so that we always subtract > tapeSpace, but if there is less than e.g. 32 kB of memory left after that > (including, if it went below 0), then we bump availMem back up to 32 kB. So > we'd always

Sort performance cliff with small work_mem

2018-05-02 Thread Heikki Linnakangas
Hi, I spent some time performance testing sorting, and spotted a funny phenomenon with very small work_mem settings. This example demonstrates it well: I sorted about 1 GB worth of pre-sorted integers, with different settings of work_mem, between 64 kB (the minimum) and 100 kB. I also adde