Robert Haas <robertmh...@gmail.com> writes:
> On Mon, Jun 9, 2014 at 11:09 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> I'm quite prepared to believe that we should change NTUP_PER_BUCKET ...
>> but appealing to standard advice isn't a good basis for arguing that.
>> Actual performance measurements (in both batched and unbatched cases)
>> would be a suitable basis for proposing a change.
> Well, it's all in what scenario you test, right?  If you test the case
> where something overflows work_mem as a result of the increased size
> of the bucket array, it's always going to suck.  And if you test the
> case where that doesn't happen, it's likely to win.  I think Stephen
> Frost has already done quite a bit of testing in this area, on
> previous threads.  But there's no one-size-fits-all solution.

I don't really recall any hard numbers being provided.  I think if we
looked at some results that said "here's the average gain, and here's
the worst-case loss, and here's an estimate of how often you'd hit the
worst case", then we could make a decision.

However, I notice that it's already the case that we make a
to-batch-or-not-to-batch decision on the strength of some very crude
numbers during ExecChooseHashTableSize, and we explicitly don't
consider palloc overhead there.  It would certainly be easy enough to
use two different NTUP_PER_BUCKET target load factors depending on
which path is being taken in ExecChooseHashTableSize.  So maybe part of
the answer is to not require those numbers to be the same.

			regards, tom lane
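
[Editor's sketch, for illustration only.]  The following is a minimal C
sketch of the two-load-factor idea Tom describes above.  It is not the
actual nodeHash.c code: the constant names NTUP_PER_BUCKET_UNBATCHED and
NTUP_PER_BUCKET_BATCHED and the helper choose_hash_table_size() are
hypothetical, and the real ExecChooseHashTableSize does considerably more
(tuple overhead estimates, the skew optimization, etc.).  It only shows
how the bucket-array sizing could use a roomier load factor on the
unbatched path and a denser one on the batched path.

#include <limits.h>
#include <math.h>

/*
 * Hypothetical load-factor targets -- these names do not exist in
 * nodeHash.c; they stand in for "two different NTUP_PER_BUCKET values".
 */
#define NTUP_PER_BUCKET_UNBATCHED	1	/* roomy buckets, fast probes */
#define NTUP_PER_BUCKET_BATCHED		10	/* dense buckets, less memory */

/*
 * Simplified stand-in for part of ExecChooseHashTableSize's logic:
 * decide whether we must batch, then size the bucket array with a load
 * factor that depends on which path was taken.
 */
static void
choose_hash_table_size(double ntuples, double inner_rel_bytes,
					   double hash_table_bytes,
					   int *numbuckets, int *numbatches)
{
	double		nbuckets;

	if (inner_rel_bytes <= hash_table_bytes)
	{
		/* unbatched path: the whole inner relation fits in work_mem */
		*numbatches = 1;
		nbuckets = ntuples / NTUP_PER_BUCKET_UNBATCHED;
	}
	else
	{
		/* batched path: keep the per-batch bucket array small */
		*numbatches = (int) ceil(inner_rel_bytes / hash_table_bytes);
		nbuckets = (ntuples / *numbatches) / NTUP_PER_BUCKET_BATCHED;
	}

	/* clamp to a sane range (illustrative bounds) */
	if (nbuckets < 1024.0)
		nbuckets = 1024.0;
	if (nbuckets > (double) (INT_MAX / 2))
		nbuckets = (double) (INT_MAX / 2);
	*numbuckets = (int) nbuckets;
}

The particular values used here (1 and 10) are placeholders; choosing
real targets is exactly what the average-gain/worst-case measurements
discussed above would need to inform.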