On Tue, Aug 28, 2018 at 8:02 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> I think this argument is a red herring TBH. The example Robert shows is
> of *zero* interest for dynahash or catcache, unless it's taking only the
> low order 3 bits of the OID for the bucket number. But actually we'll
> increase the table size proportionally to the number of entries, so
> that you can't have say 1000 table entries without at least 10 bits
> being used for the bucket number. That means that you'd only have
> trouble if those 1000 tables all had OIDs exactly 1K (or some multiple
> of that) apart. Such a case sounds quite contrived from here.
Hmm. I was thinking that it was a problem if the number of OIDs
consumed per table was a FACTOR of 1000, not just if it was a POWER of
1000. I mean, if it's, say, 4, that means three-quarters of your hash
table buckets are unused, which seems poor. But maybe it's not really
a big enough problem in practice for us to care? Dunno.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
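[For illustration, the bucket underuse being discussed can be sketched with a toy script. This is not PostgreSQL's dynahash code; the identity "hash" below stands in for the thread's premise of taking an OID's low-order bits directly as the bucket number, and the stride of 4 and table size of 1024 are assumed values matching the examples above.]

```python
# Toy demo (assumption: bucket = low-order bits of the raw OID, as in
# the scenario discussed above; real dynahash mixes the key first).
NBUCKETS = 1024   # power-of-two table size, >= 10 bits for 1000 entries
STRIDE = 4        # hypothetical: every table consumes 4 consecutive OIDs

def bucket(oid, nbuckets):
    # identity hash masked to the low-order bits, i.e. oid % nbuckets
    return oid & (nbuckets - 1)

# 1000 tables whose OIDs are all STRIDE apart, starting at an arbitrary base
oids = [10000 + STRIDE * i for i in range(1000)]
used = {bucket(oid, NBUCKETS) for oid in oids}
print(len(used), NBUCKETS)  # 256 1024 -- three-quarters of buckets empty
```

Because every OID is a multiple of 4 past the base, only bucket numbers congruent to the base mod 4 can ever be hit: 256 of the 1024 buckets, leaving the other three-quarters unused, exactly the factor-of-the-stride effect described above. With a stride of exactly 1024 (Tom's "OIDs exactly 1K apart"), all entries would land in a single bucket.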