On 28/02/2025 03:53, Peter Geoghegan wrote:
On Sat, Feb 8, 2025 at 8:47 AM Michail Nikolaev
<michail.nikol...@gmail.com> wrote:
Just some commit messages + few cleanups.

I'm worried about this:

+These longer pin lifetimes can cause buffer exhaustion with messages like "no
+unpinned buffers available" when the index has many pages that have similar
+ordering; but future work can figure out how to best work that out.

I think that we should have some kind of upper bound on the number of
pins that can be acquired at any one time, in order to completely
avoid these problems. Solving that problem will probably require GiST
expertise that I don't have right now.

+1. With no limit, it seems pretty easy to hold thousands of buffer pins with this.

The index can set IndexScanDesc->xs_recheck to indicate that the quals must be rechecked. Perhaps we should have a similar flag to indicate that the visibility must be rechecked.
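To illustrate what I mean, something along these lines in IndexScanDescData (the xs_visrecheck field is made up here just for illustration; only xs_recheck exists today):

typedef struct IndexScanDescData
{
    /* ... existing fields ... */
    bool        xs_recheck;     /* T means scan keys must be rechecked */

    /*
     * Hypothetical: T means the tuple's visibility must still be checked
     * against the heap, because the index AM could not guarantee that its
     * VM lookup is still valid (e.g. it had to drop its pins).
     */
    bool        xs_visrecheck;

    /* ... existing fields ... */
} IndexScanDescData;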

Matthias's earlier patch (https://www.postgresql.org/message-id/CAEze2Wg1kbpo_Q1%3D9X68JRsgfkyPCk4T0QN%2BqKz10%2BFVzCAoGA%40mail.gmail.com) had a more complicated mechanism to track the pinned buffers. A later patch got rid of that, which simplified things a lot. I wonder if we need something like that after all.
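Just to sketch the general shape of a bounded approach (this is not Matthias's mechanism, and all the names below are made up): keep a small fixed-size ring of held pins, and when the cap is hit, release the oldest pin; tuples whose visibility relied on that pin would then need to be rechecked, along the lines of the flag above.

#define GIST_MAX_HELD_PINS  64      /* arbitrary cap, for illustration only */

typedef struct GISTPinRing
{
    Buffer      bufs[GIST_MAX_HELD_PINS];
    int         nheld;          /* number of slots in use */
    int         oldest;         /* index of the oldest held pin */
} GISTPinRing;

/*
 * Remember a newly pinned buffer.  If the cap has been reached, release the
 * oldest pin to make room; tuples that depended on that pin would have to
 * fall back to a heap visibility check.
 */
static void
gistRememberPin(GISTPinRing *ring, Buffer buf)
{
    if (ring->nheld == GIST_MAX_HELD_PINS)
    {
        ReleaseBuffer(ring->bufs[ring->oldest]);
        ring->bufs[ring->oldest] = buf;
        ring->oldest = (ring->oldest + 1) % GIST_MAX_HELD_PINS;
    }
    else
    {
        ring->bufs[(ring->oldest + ring->nheld) % GIST_MAX_HELD_PINS] = buf;
        ring->nheld++;
    }
}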


Here's a completely different line of attack: Instead of holding buffer pins for longer, what if we checked the visibility map earlier? We could check the visibility map already when we construct the GISTSearchHeapItem, and set a flag in IndexScanDesc to tell IndexOnlyNext() that we have already done that. IndexOnlyNext() would have three cases:

1. The index AM has not checked the visibility map. Check it in IndexOnlyNext(), and fetch the tuple if it's not set. This is what it always does today.

2. The index AM has checked the visibility map, and the VM bit was set. IndexOnlyNext() can skip the VM check and use the tuple directly.

3. The index AM has checked the visibility map, and the VM bit was not set. IndexOnlyNext() will fetch the tuple to check its visibility.
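Roughly, the IndexOnlyNext() side could look like this (the xs_vmstatus field and the enum are made up for illustration; tuple_from_heap, ioss_VMBuffer and VM_ALL_VISIBLE are what the current code already uses):

typedef enum
{
    INDEX_VM_NOT_CHECKED,       /* case 1: index AM did not look at the VM */
    INDEX_VM_ALL_VISIBLE,       /* case 2: index AM saw the VM bit set */
    INDEX_VM_NOT_ALL_VISIBLE    /* case 3: index AM saw the VM bit clear */
} IndexVMStatus;

    /* in the IndexOnlyNext() loop, after index_getnext_tid() */
    switch (scandesc->xs_vmstatus)
    {
        case INDEX_VM_NOT_CHECKED:
            /* What IndexOnlyNext() does today: consult the VM here. */
            tuple_from_heap = !VM_ALL_VISIBLE(scandesc->heapRelation,
                                              ItemPointerGetBlockNumber(tid),
                                              &node->ioss_VMBuffer);
            break;
        case INDEX_VM_ALL_VISIBLE:
            /* The AM already saw the bit set; use the index tuple directly. */
            tuple_from_heap = false;
            break;
        case INDEX_VM_NOT_ALL_VISIBLE:
            /* The AM saw the bit clear; fetch the heap tuple for visibility. */
            tuple_from_heap = true;
            break;
    }

The GiST side would fill in that status when it constructs the GISTSearchHeapItem, while it still holds the index page pin.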

--
Heikki Linnakangas
Neon (https://neon.tech)

