On Sun, Apr 28, 2019 at 4:36 AM Peter Geoghegan <p...@bowt.ie> wrote:
> On Sat, Apr 27, 2019 at 5:13 PM Alexander Korotkov
> <a.korot...@postgrespro.ru> wrote:
> > Yes, increasing the Bloom filter size also helps.  But my intention
> > was to make a non-lossy check here.
>
> Why is that your intention? Do you want to do this as a feature for
> Postgres 13, or do you want to treat this as a bug that we need to
> backpatch a fix for?

I think this is definitely not a bug fix.  The Bloom filter was designed
to be lossy; there is no blaming it for that :)

> Can we avoid the problem you saw with the Bloom filter approach by
> using the real size of the index (i.e.
> smgrnblocks()/RelationGetNumberOfBlocks()) to size the Bloom filter,
> and/or by rethinking the work_mem cap? Maybe we can have a WARNING
> that advertises that work_mem is probably too low?
>
> The state->downlinkfilter Bloom filter should be small in almost all
> cases, so I still don't fully understand your concern. With a 100GB
> index, we'll have ~13 million blocks. We only need a Bloom filter that
> is ~250MB to have less than a 1% chance of missing inconsistencies
> even with such a large index. I admit that it's unfriendly that users
> are not warned about the shortage currently, but that is something we
> can probably find a simple (backpatchable) fix for.

Sounds reasonable.  I'll think about proposing a backpatch of something
like this.
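Roughly the shape I have in mind, as a sketch only (not tested): it is
written against bt_check_every_level() in contrib/amcheck/verify_nbtree.c
and uses only existing APIs (RelationGetNumberOfBlocks(), bloom_create(),
ereport()); the WARNING threshold, message wording, and placement are my
invention, and the 2-bytes-per-element target is taken from the sizing
comment in lib/bloomfilter.c.

    int64       total_pages;
    uint64      seed;

    /*
     * Size the downlink filter from the relation's true current size,
     * not possibly-stale pg_class.relpages.  With 8KB blocks, a 100GB
     * index is ~13.1 million blocks; at bloomfilter.c's target of ~2
     * bytes per element, that is only ~25MB of bitset to keep the
     * false positive rate below 1% (by my arithmetic, so Peter's
     * ~250MB figure is comfortably sufficient either way).
     */
    total_pages = (int64) RelationGetNumberOfBlocks(state->rel);

    /*
     * bloom_create() silently clamps the bitset to work_mem kilobytes.
     * Warn when that clamp will be hit, since the false positive rate
     * then degrades and inconsistencies may be missed.  (Hypothetical
     * threshold and wording.)
     */
    if (total_pages * 2 > (int64) work_mem * 1024)
        ereport(WARNING,
                (errmsg("work_mem is too low to reliably detect missing downlinks in index \"%s\"",
                        RelationGetRelationName(state->rel)),
                 errhint("Increase work_mem to keep the chance of missing an inconsistency below 1%%.")));

    /* Same seeding approach as the existing heapallindexed filter */
    seed = random();
    state->downlinkfilter = bloom_create(total_pages, work_mem, seed);

The threshold deliberately mirrors bloom_create()'s own internal cap, so
the WARNING would fire exactly when the filter actually gets clamped.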
------
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company