On Mon, Jun 18, 2018 at 10:33 PM, Peter Geoghegan <p...@bowt.ie> wrote:
> On Mon, Jun 18, 2018 at 7:57 AM, Claudio Freire <klaussfre...@gmail.com> wrote:
>> Way back when I was dabbling in this kind of endeavor, my main idea to
>> counteract that, and possibly improve performance overall, was a
>> microvacuum kind of thing that would do some on-demand cleanup to
>> remove duplicates or make room before page splits. Since nbtree
>> uniqueification enables efficient retail deletions, that could end up
>> as a net win.
>
> That sounds like a mechanism that works a bit like
> _bt_vacuum_one_page(), which we run at the last second before a page
> split. We do this to see if a page split that looks necessary can
> actually be avoided.
>
> I imagine that retail index tuple deletion (the whole point of this
> project) would be run by a VACUUM-like process that kills tuples that
> are dead to everyone. Even with something like zheap, you cannot just
> delete index tuples until you establish that they're truly dead. I
> guess that the delete marking stuff that Robert mentioned marks tuples
> as dead when the deleting transaction commits.
No, I don't think that is the case, because we want to perform in-place
updates even when indexed columns are updated. If we don't delete-mark
the old index tuple before performing the in-place update, we will end
up with two live tuples in the index that point to the same heap TID.
A rough sketch of the problem is below.
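To make the failure mode concrete, here is a toy C simulation. None of
this is PostgreSQL code: IndexEntry, index_delete_mark, inplace_update,
and the integer "heap TIDs" are all made-up names for illustration. It
only shows why skipping the delete-mark step leaves two live index
entries pointing at one heap TID:

/*
 * Toy simulation of delete-marking before an in-place update.  With
 * zheap-style in-place updates, the heap TID of a row never changes, so
 * if an indexed column is updated in place and the old index entry is
 * NOT delete-marked first, the index ends up with two "live" entries
 * for the same heap TID.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct IndexEntry
{
    int  key;           /* indexed column value */
    int  heap_tid;      /* simulated heap TID; stable under in-place update */
    bool delete_marked;
} IndexEntry;

static IndexEntry index_entries[16];
static int        n_entries = 0;

static void
index_insert(int key, int heap_tid)
{
    index_entries[n_entries].key = key;
    index_entries[n_entries].heap_tid = heap_tid;
    index_entries[n_entries].delete_marked = false;
    n_entries++;
}

/* Retail deletion, made possible by unique (key, heap TID) pairs. */
static void
index_delete_mark(int key, int heap_tid)
{
    for (int i = 0; i < n_entries; i++)
        if (index_entries[i].key == key &&
            index_entries[i].heap_tid == heap_tid)
            index_entries[i].delete_marked = true;
}

static void
inplace_update(int old_key, int new_key, int heap_tid,
               bool delete_mark_first)
{
    if (delete_mark_first)
        index_delete_mark(old_key, heap_tid);  /* the step under discussion */
    index_insert(new_key, heap_tid);           /* new entry, same heap TID */
}

static int
count_live_entries_for_tid(int heap_tid)
{
    int live = 0;

    for (int i = 0; i < n_entries; i++)
        if (index_entries[i].heap_tid == heap_tid &&
            !index_entries[i].delete_marked)
            live++;
    return live;
}

int
main(void)
{
    /* Row at simulated TID 100, indexed column value 1. */
    index_insert(1, 100);
    inplace_update(1, 2, 100, false);          /* skip delete-marking */
    printf("without delete-mark: %d live entries for TID 100\n",
           count_live_entries_for_tid(100));   /* prints 2: the problem */

    n_entries = 0;
    index_insert(1, 100);
    inplace_update(1, 2, 100, true);           /* delete-mark first */
    printf("with delete-mark:    %d live entries for TID 100\n",
           count_live_entries_for_tid(100));   /* prints 1 */
    return 0;
}

An index scan that treats every non-marked entry as live would visit the
same heap row twice in the first case, which is why the delete-mark has
to happen no later than the in-place update itself, not only at commit
of the deleting transaction.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com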