On Tue, Sep 4, 2018 at 10:14 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
> On Mon, Sep 3, 2018 at 2:44 PM Dilip Kumar <dilipbal...@gmail.com> wrote:
>> On Mon, Sep 3, 2018 at 8:37 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> > On Sat, Sep 1, 2018 at 10:28 AM Dilip Kumar <dilipbal...@gmail.com> wrote:
>> >>
>> >> I think if we compute with the below formula, which I suggested upthread,
>> >>
>> >> #define HASH_MAX_BITMAPS Min(BLCKSZ / 8, 1024)
>> >>
>> >> then for BLCKSZ of 8K and bigger it will remain the same value, where it
>> >> does not overrun. And for a small BLCKSZ, I think it will give
>> >> sufficient space for the bitmaps. If BLCKSZ is 1K, then sizeof
>> >> (HashMetaPageData) + sizeof (HashPageOpaque) = 968, which is very close
>> >> to the BLCKSZ.
>> >>
>> >
>> > Yeah, so at 1K, the value of HASH_MAX_BITMAPS will be 128 as per the above
>> > formula, which is what its value was prior to commit 620b49a1.
>> > I think it will be better if you add a comment in your patch
>> > indicating the importance/advantage of such a formula.
>> >
>> I have added the comments.
>>
In my previous patch, I mistakenly put Max(BLCKSZ / 8, 1024) instead of Min(BLCKSZ / 8, 1024). I have fixed the same.

> Thanks, I will look into it. Can you please do some pg_upgrade tests
> to ensure that this doesn't impact the upgrade? You can create a
> hash index and populate it with some data in version 10 and try
> upgrading to 11 after applying this patch. You can also try it with
> different block sizes.
>

Ok, I will do that.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
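The pg_upgrade test Amit asks for could be sketched roughly as below (the install paths and temp directories are assumptions; point OLD and NEW at your actual v10 and patched-v11 installations, and note that testing other block sizes requires servers built with `configure --with-blocksize`):

```shell
#!/bin/sh
# Hypothetical binary locations; adjust to your local installs.
OLD=${OLD:-/usr/lib/postgresql/10/bin}
NEW=${NEW:-/usr/lib/postgresql/11/bin}

# Skip gracefully on machines without both versions available.
if [ ! -x "$OLD/initdb" ] || [ ! -x "$NEW/pg_upgrade" ]; then
    echo "PostgreSQL 10/11 binaries not found; skipping"
    exit 0
fi

# 1. Build a v10 cluster holding a populated hash index.
"$OLD/initdb" -D /tmp/data10
"$OLD/pg_ctl" -D /tmp/data10 -l /tmp/pg10.log -w start
"$OLD/psql" -d postgres -c "CREATE TABLE t(a int);
    INSERT INTO t SELECT generate_series(1, 100000);
    CREATE INDEX t_hash ON t USING hash(a);"
"$OLD/pg_ctl" -D /tmp/data10 -w stop

# 2. Upgrade it to the patched v11 cluster and check the result.
"$NEW/initdb" -D /tmp/data11
"$NEW/pg_upgrade" -b "$OLD" -B "$NEW" -d /tmp/data10 -D /tmp/data11
```

After the upgrade, starting the new cluster and querying through the hash index (plus an `amcheck`-style or REINDEX sanity pass) would confirm the on-disk format is still read correctly.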
hash_overflow_fix_v2.patch