Previous discussions of Hash Joins have noted that the performance decreases when the average number of tuples per bucket increases. O(N^2) effects are seen.
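To make that concrete, here is a back-of-the-envelope sketch (made-up numbers, not taken from the PostgreSQL source): with a fixed bucket count, probe work is roughly n_outer multiplied by the average bucket load, and both of those grow with N.

  #include <stdio.h>

  int
  main(void)
  {
      long    nbuckets = 1024;    /* fixed, as when ntuples was underestimated */

      for (long N = 100000; N <= 10000000; N *= 10)
      {
          double  load = (double) N / nbuckets;     /* avg tuples per bucket */
          double  comparisons = (double) N * load;  /* ~total probe work */

          printf("N=%8ld  load=%8.1f  probe comparisons ~ %.2e\n",
                 N, load, comparisons);
      }
      return 0;
  }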
We've argued about this many different ways, yet all of those discussions
have centred around the constant NTUP_PER_BUCKET. I believe that was a
subtle mistake and I have a proposal.

The code in ExecChooseHashTableSize() line 460 says

  /*
   * Set nbuckets to achieve an average bucket load of NTUP_PER_BUCKET when
   * memory is filled.
  ...

but the calculation then sets the number of buckets like this

  dbuckets = ceil(ntuples / NTUP_PER_BUCKET);

This doesn't match the comment. If we underestimate the number of tuples
and go on to fill the available memory, we end up with an average number
of tuples per bucket higher than NTUP_PER_BUCKET. That notational
confusion has been skewing the discussion.

The calculation that would actually match the objective set out in the
comment is

  dbuckets = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;

which gives a much more useful value of dbuckets in the case where ntuples
would occupy much less space than is available. The value is always the
same as or higher than before because of the if-test that surrounds it:
that branch is only taken when the estimated inner relation fits in
memory, i.e. ntuples * tupsize <= hash_table_bytes. Given my experience
that on larger tables we end up underestimating ndistinct by 10-100-1000
times, I don't think this change is unwarranted.

This solves the problem raised in earlier discussions, since we get a
lower average number of tuples per bucket while keeping the current
NTUP_PER_BUCKET value. Everybody wins.

Comments?

--
Simon Riggs                   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
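For illustration, here is a small standalone sketch of the arithmetic
(made-up numbers and simplified code, not nodeHash.c itself; the real
function also clamps the result to max_pointers). It contrasts the two
calculations when the planner's estimate is badly low:

  #include <stdio.h>
  #include <math.h>

  #define NTUP_PER_BUCKET 10

  int
  main(void)
  {
      double  hash_table_bytes = 256.0 * 1024 * 1024;  /* e.g. 256MB work_mem */
      double  tupsize = 100.0;          /* estimated bytes per hashed tuple */
      double  est_ntuples = 10000.0;    /* the planner's (under)estimate */
      double  act_ntuples = 1000000.0;  /* what actually arrives at run time */

      /* current: buckets sized to the estimate only */
      double  dbuckets_old = ceil(est_ntuples / NTUP_PER_BUCKET);

      /* proposed: buckets sized to what could fit in memory */
      double  dbuckets_new = (hash_table_bytes / tupsize) / NTUP_PER_BUCKET;

      printf("current : %.0f buckets, actual load %.1f tuples/bucket\n",
             dbuckets_old, act_ntuples / dbuckets_old);
      printf("proposed: %.0f buckets, actual load %.1f tuples/bucket\n",
             dbuckets_new, act_ntuples / dbuckets_new);
      return 0;
  }

With those numbers the current formula gives 1000 buckets and an actual
load of 1000 tuples/bucket, while the proposed one gives around 268,000
buckets and a load of under 4 tuples/bucket.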
rethink_num_hash_buckets.v1.patch