On Wed, Jan 20, 2021 at 7:03 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
>
> On Wed, Jan 20, 2021 at 10:58 AM Peter Geoghegan <p...@bowt.ie> wrote:
> >
> > On Tue, Jan 19, 2021 at 7:54 PM Amit Kapila <amit.kapil...@gmail.com> wrote:
> > > The worst cases could be (a) when there is just one such duplicate
> > > (indexval logically unchanged) on the page and that happens to be the
> > > last item and others are new insertions, (b) same as (a) and along
> > > with it lets say there is an open transaction due to which we can't
> > > remove even that duplicate version. Have we checked the overhead or
> > > results by simulating such workloads?
> >
> > There is no such thing as a workload that has page splits caused by
> > non-HOT updaters, but almost no actual version churn from the same
> > non-HOT updaters. It's possible that a small number of individual page
> > splits will work out like that, of course, but they'll be extremely
> > rare, and impossible to see in any kind of consistent way.
> >
> > That just leaves long running transactions. Of course it's true that
> > eventually a long-running transaction will make it impossible to
> > perform any cleanup, for the usual reasons. And at that point this
> > mechanism is bound to fail (which costs additional cycles -- the
> > wasted access to a single heap page, some CPU cycles). But it's still
> > a bargain to try. Even with a long running transactions there will be
> > a great many bottom-up deletion passes that still succeed earlier on
> > (because at least some of the dups are deletable, and we can still
> > delete those that became garbage right before the long running
> > snapshot was acquired).
> >
> > How many ...
>

Typo. /many/any
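
Separately, to make the earlier simulation question concrete, a minimal
sketch of such a workload could look like the one below. All table and
index names here are hypothetical, and the insert count would need
tuning so that the relevant leaf page of the val index actually
approaches a split:

    -- Session 1: set up worst case (a): churn_val_idx gets exactly one
    -- logically-unchanged duplicate, and everything else on its leaf
    -- page is a fresh insertion.
    CREATE TABLE churn_tab (id int PRIMARY KEY, val int, ver int);
    CREATE INDEX churn_val_idx ON churn_tab (val);
    CREATE INDEX churn_ver_idx ON churn_tab (ver);

    INSERT INTO churn_tab VALUES (1, 1, 0);

    -- Session 2, for worst case (b): hold a snapshot open so that even
    -- the single old version cannot be removed:
    --   BEGIN ISOLATION LEVEL REPEATABLE READ;
    --   SELECT count(*) FROM churn_tab;  -- leave the transaction open

    -- Session 1 again: ver is indexed, so this update is non-HOT, and
    -- churn_val_idx gets a second entry for val = 1 even though val is
    -- logically unchanged.
    UPDATE churn_tab SET ver = ver + 1 WHERE id = 1;

    -- Fill the rest of the same churn_val_idx leaf page with new
    -- insertions until it is about to split; the bottom-up deletion
    -- pass then pays for a heap page visit that can free at most the
    -- one old duplicate (or nothing at all while session 2's snapshot
    -- remains open).
    INSERT INTO churn_tab SELECT g, g, 0 FROM generate_series(2, 400) g;

Timing the final insert with and without session 2's open transaction
would be one way to measure the overhead of the wasted heap accesses in
these worst cases.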
--
With Regards,
Amit Kapila.