On Mon, Sep 3, 2018 at 8:37 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Sat, Sep 1, 2018 at 10:28 AM Dilip Kumar <dilipbal...@gmail.com> wrote:
>>
>> On Sat, Sep 1, 2018 at 8:22 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> > On Thu, Aug 30, 2018 at 7:27 AM, Amit Kapila <amit.kapil...@gmail.com>
>> > wrote:
>> >
>> > I wouldn't bother bumping HASH_VERSION.  First, the fix needs to be
>> > back-patched, and you certainly can't back-patch a HASH_VERSION bump.
>> > Second, you should just pick a formula that gives the same answer as
>> > now for the cases where the overrun doesn't occur, and some other
>> > sufficiently-large value for the cases where an overrun currently does
>> > occur.  If you do that, you're not changing the behavior in any case
>> > that currently works, so there's really no reason for a version bump.
>> > It just becomes a bug fix at that point.
>> >
>
> Makes sense.
>
>> I think if we compute it with the formula I suggested upthread,
>>
>> #define HASH_MAX_BITMAPS Min(BLCKSZ / 8, 1024)
>>
>> then for BLCKSZ of 8K and bigger it will keep the current value, where
>> no overrun occurs.  And for a smaller BLCKSZ, I think it will leave
>> sufficient space for the hash metapage.  If BLCKSZ is 1K, then
>> sizeof(HashMetaPageData) + sizeof(HashPageOpaque) = 968, which is very
>> close to the BLCKSZ.
>>
>
> Yeah, so at 1K the value of HASH_MAX_BITMAPS as per the above formula
> will be 128, which is what its value was prior to commit 620b49a1.
> I think it will be better if you add a comment in your patch
> indicating the importance/advantage of such a formula.
>

I have added the comments.
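For reference, a minimal sketch of what the commented definition could
look like (the placement in src/include/access/hash.h and the exact
comment wording are assumptions; the attached patch is authoritative):

/*
 * HASH_MAX_BITMAPS is the maximum number of bitmap pages whose block
 * numbers can be tracked in the metapage's hashm_mapp[] array.  Scaling
 * it as BLCKSZ / 8 keeps HashMetaPageData small enough to fit on a page
 * even for a 1K BLCKSZ (where it comes out to 128, the value before
 * commit 620b49a1), while capping it at 1024 leaves the value unchanged
 * for BLCKSZ of 8K and above, where no overrun occurs.
 */
#define HASH_MAX_BITMAPS	Min(BLCKSZ / 8, 1024)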
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
Attachment: hash_overflow_fix_v1.patch