On Mon, 2 Dec 2024 at 15:25, Konstantin Knizhnik <knizh...@garret.ru> wrote:
>
> Hi hackers,
>
> Attached script reproduces the problem with incorrect results of
> `select count(*)` (it returns a larger number of records than are
> actually present in the table).
> It is not always reproduced, so you may need to repeat it multiple
> times - on my system it failed 3 times out of 10.
>
> The problem takes place with pg16/17/18 (other versions I have not checked).

I suspect all supported back branches are affected. As I also partially
mentioned offline:

The running theory is that the bitmap executor nodes incorrectly assume
that the rows contained in the bitmap are all still present in the
index, and thus that they're allowed to check only the visibility map
to see whether a reference contained in the bitmap is visible. However,
this seems incorrect: note that index AMs must hold at least pins on
the index pages that contain their results while those results are
being returned by amgettuple() [0], and that amgetbitmap() doesn't do
that for all TIDs in the bitmap; this allows vacuum to remove TIDs from
the index (and later, from the heap) that are still present in the
bitmap used by the scan.

Concurrency timeline:

Session 1. amgetbitmap() takes a snapshot of the index's contents,
which contains references to dead tuples on heap page P1.
Session 2. VACUUM runs on the heap: it removes the TIDs for P1 from
the index, deletes those tuples from the heap pages, and finally sets
the heap pages' VM bits to ALL_VISIBLE, including the now-cleaned
page P1.
Session 1. Executes the bitmap heap scan using that bitmap, without
checking the tuples on P1 due to ALL_VISIBLE and a lack of output
columns.
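
To make the race concrete, this is the check the patches below remove
(excerpted from heapam_scan_bitmap_next_block() on master; the older
branches have the equivalent test in nodeBitmapHeapscan.c), with
comments added to mark where the stale VM bit bites:

if (!(scan->rs_flags & SO_NEED_TUPLES) &&   /* no table columns needed */
    !tbmres->recheck &&                     /* bitmap entries are exact */
    VM_ALL_VISIBLE(scan->rs_rd, tbmres->blockno, &hscan->rs_vmbuffer))
{
    /* can't be lossy in the skip_fetch case */
    Assert(tbmres->ntuples >= 0);

    /*
     * Unsound: the VM bit may have been set by a vacuum that ran after
     * amgetbitmap() built the bitmap, so tbmres->ntuples can include
     * TIDs that vacuum has since removed from both index and heap.
     */
    hscan->rs_empty_tuples_pending += tbmres->ntuples;
    return true;
}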

I think this might be an oversight when the feature was originally
committed in 7c70996e (PG11): we don't know when the VM bit was set,
and the bitmap we're scanning may thus be out-of-date (and should have
had those TIDs removed if it had been an index), so I propose disabling
this optimization for now, as attached. Patch v1 is against a recent
HEAD; the PG17 patch applies to the 17 branch; and the PG16- patch
should work on all (or at least, most) active back branches older than
PG17.

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

PS.
I don't think the optimization itself is completely impossible, and we
can probably re-enable an optimization like it if (or when) we find a
way to reliably keep track of when to stop using it. I don't think
that's an easy fix, though, and definitely not one for the back
branches.

The solution I could think of that keeps most of this optimization
requires the bitmap heap scan to notice that a concurrent process
started removing TIDs from the heap after amgetbitmap() was called;
i.e., a "vacuum generation counter" that is incremented every time heap
vacuum starts its cleanup run. This is quite non-trivial, however, as
we don't have much in place for per-relation shared structures into
which we could put such a value, nor a good place to signal changes of
the value to bitmap heap-scanning backends.
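
A minimal sketch of the shape that could take, purely to illustrate:
the shared structure and every name below (RelVacuumGen and the helper
functions) are hypothetical and don't exist today, which is exactly the
missing piece.

#include "postgres.h"
#include "port/atomics.h"

/* Hypothetical per-relation shared state; we have no home for this yet. */
typedef struct RelVacuumGen
{
    pg_atomic_uint64 vacuum_gen;    /* bumped when heap vacuum starts
                                     * removing dead TIDs */
} RelVacuumGen;

/* vacuum side: call before the first round of index/heap TID removal */
static void
vacuum_gen_advance(RelVacuumGen *gen)
{
    pg_atomic_fetch_add_u64(&gen->vacuum_gen, 1);
}

/* scan side: remember the generation under which the bitmap was built */
static uint64
vacuum_gen_snapshot(RelVacuumGen *gen)
{
    return pg_atomic_read_u64(&gen->vacuum_gen);
}

/*
 * scan side: the skip_fetch path stays sound only while no TID removal
 * has started since the bitmap was built; otherwise we must fetch the
 * heap page.
 */
static bool
vacuum_gen_unchanged(RelVacuumGen *gen, uint64 bitmap_gen)
{
    return pg_atomic_read_u64(&gen->vacuum_gen) == bitmap_gen;
}

The scan would call vacuum_gen_snapshot() when amgetbitmap() returns,
and fall back to fetching the heap page whenever vacuum_gen_unchanged()
returns false.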
From 07739e5af83664b67ea02d3db7a461a106d74040 Mon Sep 17 00:00:00 2001
From: Matthias van de Meent <boekewurm+postg...@gmail.com>
Date: Mon, 2 Dec 2024 15:59:35 +0100
Subject: [PATCH v1] Disable BitmapScan's skip_fetch optimization

The optimization doesn't take into account that a concurrent vacuum can
remove dead TIDs from the index and heap and mark the affected pages
ALL_VISIBLE while the bitmap still contains pointers to those dead
TIDs.

The optimization itself isn't impossible, but it would require a way to
detect that vacuum has started removing TIDs, and to stop using the
optimization from that point on. We currently have no such mechanism,
making the optimization unsound to keep around.

Reported-By: Konstantin Knizhnik <knizh...@garret.ru>
Backpatch-through: 13
---
 src/include/access/heapam.h               |  9 +++--
 src/include/access/tableam.h              |  3 +-
 src/backend/access/heap/heapam.c          | 13 -------
 src/backend/access/heap/heapam_handler.c  | 28 ---------------
 src/backend/executor/nodeBitmapHeapscan.c | 42 ++---------------------
 5 files changed, 8 insertions(+), 87 deletions(-)

diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 65999dd64e..42f28109e2 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -92,11 +92,10 @@ typedef struct HeapScanDescData
        ParallelBlockTableScanWorkerData *rs_parallelworkerdata;
 
        /*
-        * These fields are only used for bitmap scans for the "skip fetch"
-        * optimization. Bitmap scans needing no fields from the heap may skip
-        * fetching an all visible block, instead using the number of tuples per
-        * block reported by the bitmap to determine how many NULL-filled tuples
-        * to return.
+        * These fields were kept around to guarantee ABI stability, but are
+        * otherwise unused. They were only used for bitmap scans for the
+        * "skip fetch" optimization, which turned out to be unsound when the
+        * second phase of vacuum ran concurrently.
         */
        Buffer          rs_vmbuffer;
        int                     rs_empty_tuples_pending;
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index da661289c1..5d54b0a18b 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -955,8 +955,7 @@ table_beginscan_bm(Relation rel, Snapshot snapshot,
 {
        uint32          flags = SO_TYPE_BITMAPSCAN | SO_ALLOW_PAGEMODE;
 
-       if (need_tuple)
-               flags |= SO_NEED_TUPLES;
+       flags |= SO_NEED_TUPLES;
 
        return rel->rd_tableam->scan_begin(rel, snapshot, nkeys, key, NULL, flags);
 }
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index bbe64b1e53..ca2dbb0b75 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1188,19 +1188,6 @@ heap_rescan(TableScanDesc sscan, ScanKey key, bool set_params,
        if (BufferIsValid(scan->rs_cbuf))
                ReleaseBuffer(scan->rs_cbuf);
 
-       if (BufferIsValid(scan->rs_vmbuffer))
-       {
-               ReleaseBuffer(scan->rs_vmbuffer);
-               scan->rs_vmbuffer = InvalidBuffer;
-       }
-
-       /*
-        * Reset rs_empty_tuples_pending, a field only used by bitmap heap scan,
-        * to avoid incorrectly emitting NULL-filled tuples from a previous scan
-        * on rescan.
-        */
-       scan->rs_empty_tuples_pending = 0;
-
        /*
         * The read stream is reset on rescan. This must be done before
         * initscan(), as some state referred to by read_stream_reset() is reset
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index 6f8b1b7929..93980be98a 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -2131,24 +2131,6 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
        hscan->rs_cindex = 0;
        hscan->rs_ntuples = 0;
 
-       /*
-        * We can skip fetching the heap page if we don't need any fields from the
-        * heap, the bitmap entries don't need rechecking, and all tuples on the
-        * page are visible to our transaction.
-        */
-       if (!(scan->rs_flags & SO_NEED_TUPLES) &&
-               !tbmres->recheck &&
-               VM_ALL_VISIBLE(scan->rs_rd, tbmres->blockno, &hscan->rs_vmbuffer))
-       {
-               /* can't be lossy in the skip_fetch case */
-               Assert(tbmres->ntuples >= 0);
-               Assert(hscan->rs_empty_tuples_pending >= 0);
-
-               hscan->rs_empty_tuples_pending += tbmres->ntuples;
-
-               return true;
-       }
-
        /*
         * Ignore any claimed entries past what we think is the end of the
         * relation. It may have been extended after the start of our scan (we
@@ -2261,16 +2243,6 @@ heapam_scan_bitmap_next_tuple(TableScanDesc scan,
        Page            page;
        ItemId          lp;
 
-       if (hscan->rs_empty_tuples_pending > 0)
-       {
-               /*
-                * If we don't have to fetch the tuple, just return nulls.
-                */
-               ExecStoreAllNullTuple(slot);
-               hscan->rs_empty_tuples_pending--;
-               return true;
-       }
-
        /*
         * Out of range?  If so, nothing more to look at on this page
         */
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 6b48a6d835..82402c0ac0 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -185,24 +185,11 @@ BitmapHeapNext(BitmapHeapScanState *node)
                 */
                if (!scan)
                {
-                       bool            need_tuples = false;
-
-                       /*
-                        * We can potentially skip fetching heap pages if we do not need
-                        * any columns of the table, either for checking non-indexable
-                        * quals or for returning data.  This test is a bit simplistic, as
-                        * it checks the stronger condition that there's no qual or return
-                        * tlist at all. But in most cases it's probably not worth working
-                        * harder than that.
-                        */
-                       need_tuples = (node->ss.ps.plan->qual != NIL ||
-                                                  node->ss.ps.plan->targetlist != NIL);
-
                        scan = table_beginscan_bm(node->ss.ss_currentRelation,
                                                                          node->ss.ps.state->es_snapshot,
                                                                          0,
                                                                          NULL,
-                                                                         need_tuples);
+                                                                         true);
 
                        node->ss.ss_currentScanDesc = scan;
                }
@@ -459,7 +446,6 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
                        while (node->prefetch_pages < node->prefetch_target)
                        {
                                TBMIterateResult *tbmpre = tbm_iterate(prefetch_iterator);
-                               bool            skip_fetch;
 
                                if (tbmpre == NULL)
                                {
@@ -470,20 +456,7 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
                                }
                                node->prefetch_pages++;
 
-                               /*
-                                * If we expect not to have to actually read this heap page,
-                                * skip this prefetch call, but continue to run the prefetch
-                                * logic normally.  (Would it be better not to increment
-                                * prefetch_pages?)
-                                */
-                               skip_fetch = (!(scan->rs_flags & SO_NEED_TUPLES) &&
-                                                         !tbmpre->recheck &&
-                                                         VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-                                                                                        tbmpre->blockno,
-                                                                                        &node->pvmbuffer));
-
-                               if (!skip_fetch)
-                                       PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
+                               PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
                        }
                }
 
@@ -500,7 +473,6 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
                        {
                                TBMIterateResult *tbmpre;
                                bool            do_prefetch = false;
-                               bool            skip_fetch;
 
                                /*
                                 * Recheck under the mutex. If some other process has already
@@ -526,15 +498,7 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc 
scan)
                                        break;
                                }
 
-                               /* As above, skip prefetch if we expect not to need page */
-                               skip_fetch = (!(scan->rs_flags & SO_NEED_TUPLES) &&
-                                                         !tbmpre->recheck &&
-                                                         VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-                                                                                        tbmpre->blockno,
-                                                                                        &node->pvmbuffer));
-
-                               if (!skip_fetch)
-                                       PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
+                               PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
                        }
                }
        }
-- 
2.45.2

From 43144d7511c93ed153b4ab5b30bf59ea3af20fbd Mon Sep 17 00:00:00 2001
From: Matthias van de Meent <boekewurm+postgres@gmail.com>
Date: Mon, 2 Dec 2024 15:38:14 +0100
Subject: [PATCH v1] Remove BitmapScan's skip_fetch optimization

The optimization doesn't take into account that a concurrent vacuum can
remove dead TIDs from the index and heap and mark the affected pages
ALL_VISIBLE while the bitmap still contains pointers to those dead
TIDs.

The optimization itself isn't impossible, but it would require a way to
detect that vacuum has started removing TIDs, and to stop using the
optimization from that point on. We currently have no such mechanism,
making the optimization unsound to keep around.

Reported-By: Konstantin Knizhnik <knizhnik@garret.ru>
Backpatch-through: 13
---
 src/include/access/heapam.h               |  1 -
 src/include/access/tableam.h              | 12 +------
 src/backend/access/heap/heapam.c          | 18 ----------
 src/backend/access/heap/heapam_handler.c  | 28 ---------------
 src/backend/executor/nodeBitmapHeapscan.c | 43 ++---------------------
 5 files changed, 4 insertions(+), 98 deletions(-)

diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h
index 96cf82f97b..6dd233dc17 100644
--- a/src/include/access/heapam.h
+++ b/src/include/access/heapam.h
@@ -99,7 +99,6 @@ typedef struct HeapScanDescData
 	 * block reported by the bitmap to determine how many NULL-filled tuples
 	 * to return.
 	 */
-	Buffer		rs_vmbuffer;
 	int			rs_empty_tuples_pending;
 
 	/* these fields only used in page-at-a-time mode and for bitmap scans */
diff --git a/src/include/access/tableam.h b/src/include/access/tableam.h
index adb478a93c..283a19babe 100644
--- a/src/include/access/tableam.h
+++ b/src/include/access/tableam.h
@@ -62,13 +62,6 @@ typedef enum ScanOptions
 
 	/* unregister snapshot at scan end? */
 	SO_TEMP_SNAPSHOT = 1 << 9,
-
-	/*
-	 * At the discretion of the table AM, bitmap table scans may be able to
-	 * skip fetching a block from the table if none of the table data is
-	 * needed. If table data may be needed, set SO_NEED_TUPLES.
-	 */
-	SO_NEED_TUPLES = 1 << 10,
 }			ScanOptions;
 
 /*
@@ -955,14 +948,11 @@ table_beginscan_strat(Relation rel, Snapshot snapshot,
  */
 static inline TableScanDesc
 table_beginscan_bm(Relation rel, Snapshot snapshot,
-				   int nkeys, struct ScanKeyData *key, bool need_tuple)
+				   int nkeys, struct ScanKeyData *key)
 {
 	TableScanDesc result;
 	uint32		flags = SO_TYPE_BITMAPSCAN | SO_ALLOW_PAGEMODE;
 
-	if (need_tuple)
-		flags |= SO_NEED_TUPLES;
-
 	result = rel->rd_tableam->scan_begin(rel, snapshot, nkeys, key,
 										 NULL, flags);
 	result->st.bitmap.rs_shared_iterator = NULL;
diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index d00300c5dc..9ea9dec863 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -1053,8 +1053,6 @@ heap_beginscan(Relation relation, Snapshot snapshot,
 	scan->rs_base.rs_flags = flags;
 	scan->rs_base.rs_parallel = parallel_scan;
 	scan->rs_strategy = NULL;	/* set in initscan */
-	scan->rs_vmbuffer = InvalidBuffer;
-	scan->rs_empty_tuples_pending = 0;
 
 	/*
 	 * Disable page-at-a-time mode if it's not a MVCC-safe snapshot.
@@ -1170,19 +1168,6 @@ heap_rescan(TableScanDesc sscan, ScanKey key, bool set_params,
 	if (BufferIsValid(scan->rs_cbuf))
 		ReleaseBuffer(scan->rs_cbuf);
 
-	if (BufferIsValid(scan->rs_vmbuffer))
-	{
-		ReleaseBuffer(scan->rs_vmbuffer);
-		scan->rs_vmbuffer = InvalidBuffer;
-	}
-
-	/*
-	 * Reset rs_empty_tuples_pending, a field only used by bitmap heap scan,
-	 * to avoid incorrectly emitting NULL-filled tuples from a previous scan
-	 * on rescan.
-	 */
-	scan->rs_empty_tuples_pending = 0;
-
 	/*
 	 * The read stream is reset on rescan. This must be done before
 	 * initscan(), as some state referred to by read_stream_reset() is reset
@@ -1210,9 +1195,6 @@ heap_endscan(TableScanDesc sscan)
 	if (BufferIsValid(scan->rs_cbuf))
 		ReleaseBuffer(scan->rs_cbuf);
 
-	if (BufferIsValid(scan->rs_vmbuffer))
-		ReleaseBuffer(scan->rs_vmbuffer);
-
 	/*
 	 * Must free the read stream before freeing the BufferAccessStrategy.
 	 */
diff --git a/src/backend/access/heap/heapam_handler.c b/src/backend/access/heap/heapam_handler.c
index a8d95e0f1c..b432eb8140 100644
--- a/src/backend/access/heap/heapam_handler.c
+++ b/src/backend/access/heap/heapam_handler.c
@@ -2158,24 +2158,6 @@ heapam_scan_bitmap_next_block(TableScanDesc scan,
 	*blockno = tbmres->blockno;
 	*recheck = tbmres->recheck;
 
-	/*
-	 * We can skip fetching the heap page if we don't need any fields from the
-	 * heap, the bitmap entries don't need rechecking, and all tuples on the
-	 * page are visible to our transaction.
-	 */
-	if (!(scan->rs_flags & SO_NEED_TUPLES) &&
-		!tbmres->recheck &&
-		VM_ALL_VISIBLE(scan->rs_rd, tbmres->blockno, &hscan->rs_vmbuffer))
-	{
-		/* can't be lossy in the skip_fetch case */
-		Assert(tbmres->ntuples >= 0);
-		Assert(hscan->rs_empty_tuples_pending >= 0);
-
-		hscan->rs_empty_tuples_pending += tbmres->ntuples;
-
-		return true;
-	}
-
 	block = tbmres->blockno;
 
 	/*
@@ -2290,16 +2272,6 @@ heapam_scan_bitmap_next_tuple(TableScanDesc scan,
 	Page		page;
 	ItemId		lp;
 
-	if (hscan->rs_empty_tuples_pending > 0)
-	{
-		/*
-		 * If we don't have to fetch the tuple, just return nulls.
-		 */
-		ExecStoreAllNullTuple(slot);
-		hscan->rs_empty_tuples_pending--;
-		return true;
-	}
-
 	/*
 	 * Out of range?  If so, nothing more to look at on this page
 	 */
diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 689bde16dd..9bf01bb0c3 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -176,24 +176,10 @@ BitmapHeapNext(BitmapHeapScanState *node)
 		 */
 		if (!scan)
 		{
-			bool		need_tuples = false;
-
-			/*
-			 * We can potentially skip fetching heap pages if we do not need
-			 * any columns of the table, either for checking non-indexable
-			 * quals or for returning data.  This test is a bit simplistic, as
-			 * it checks the stronger condition that there's no qual or return
-			 * tlist at all. But in most cases it's probably not worth working
-			 * harder than that.
-			 */
-			need_tuples = (node->ss.ps.plan->qual != NIL ||
-						   node->ss.ps.plan->targetlist != NIL);
-
 			scan = table_beginscan_bm(node->ss.ss_currentRelation,
 									  node->ss.ps.state->es_snapshot,
 									  0,
-									  NULL,
-									  need_tuples);
+									  NULL);
 
 			node->ss.ss_currentScanDesc = scan;
 		}
@@ -453,7 +439,6 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
 			while (node->prefetch_pages < node->prefetch_target)
 			{
 				TBMIterateResult *tbmpre = tbm_iterate(prefetch_iterator);
-				bool		skip_fetch;
 
 				if (tbmpre == NULL)
 				{
@@ -465,20 +450,7 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
 				node->prefetch_pages++;
 				node->prefetch_blockno = tbmpre->blockno;
 
-				/*
-				 * If we expect not to have to actually read this heap page,
-				 * skip this prefetch call, but continue to run the prefetch
-				 * logic normally.  (Would it be better not to increment
-				 * prefetch_pages?)
-				 */
-				skip_fetch = (!(scan->rs_flags & SO_NEED_TUPLES) &&
-							  !tbmpre->recheck &&
-							  VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-											 tbmpre->blockno,
-											 &node->pvmbuffer));
-
-				if (!skip_fetch)
-					PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
+				PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
 			}
 		}
 
@@ -495,7 +467,6 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
 			{
 				TBMIterateResult *tbmpre;
 				bool		do_prefetch = false;
-				bool		skip_fetch;
 
 				/*
 				 * Recheck under the mutex. If some other process has already
@@ -523,15 +494,7 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
 
 				node->prefetch_blockno = tbmpre->blockno;
 
-				/* As above, skip prefetch if we expect not to need page */
-				skip_fetch = (!(scan->rs_flags & SO_NEED_TUPLES) &&
-							  !tbmpre->recheck &&
-							  VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-											 tbmpre->blockno,
-											 &node->pvmbuffer));
-
-				if (!skip_fetch)
-					PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
+				PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
 			}
 		}
 	}
-- 
2.45.2

From eed8c4b613b5c44113e55bc6917ef3564d4632f8 Mon Sep 17 00:00:00 2001
From: Matthias van de Meent <boekewurm+postg...@gmail.com>
Date: Mon, 2 Dec 2024 16:05:09 +0100
Subject: [PATCH v1] Disable BitmapScan's skip_fetch optimization

The optimization doesn't take into account that a concurrent vacuum can
remove dead TIDs from the index and heap and mark the affected pages
ALL_VISIBLE while the bitmap still contains pointers to those dead
TIDs.

The optimization itself isn't impossible, but it would require a way to
detect that vacuum has started removing TIDs, and to stop using the
optimization from that point on. We currently have no such mechanism,
making the optimization unsound to keep around.

Reported-By: Konstantin Knizhnik <knizh...@garret.ru>
Backpatch-through: 13
---
 src/backend/executor/nodeBitmapHeapscan.c | 64 ++---------------------
 1 file changed, 4 insertions(+), 60 deletions(-)

diff --git a/src/backend/executor/nodeBitmapHeapscan.c b/src/backend/executor/nodeBitmapHeapscan.c
index 1cf0bbddd4..214c129472 100644
--- a/src/backend/executor/nodeBitmapHeapscan.c
+++ b/src/backend/executor/nodeBitmapHeapscan.c
@@ -186,8 +186,6 @@ BitmapHeapNext(BitmapHeapScanState *node)
 
        for (;;)
        {
-               bool            skip_fetch;
-
                CHECK_FOR_INTERRUPTS();
 
                /*
@@ -212,32 +210,7 @@ BitmapHeapNext(BitmapHeapScanState *node)
                        else
                                node->lossy_pages++;
 
-                       /*
-                        * We can skip fetching the heap page if we don't need any fields
-                        * from the heap, and the bitmap entries don't need rechecking,
-                        * and all tuples on the page are visible to our transaction.
-                        *
-                        * XXX: It's a layering violation that we do these checks above
-                        * tableam, they should probably moved below it at some point.
-                        */
-                       skip_fetch = (node->can_skip_fetch &&
-                                                 !tbmres->recheck &&
-                                                 VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-                                                                                tbmres->blockno,
-                                                                                &node->vmbuffer));
-
-                       if (skip_fetch)
-                       {
-                               /* can't be lossy in the skip_fetch case */
-                               Assert(tbmres->ntuples >= 0);
-
-                               /*
-                                * The number of tuples on this page is put into
-                                * node->return_empty_tuples.
-                                */
-                               node->return_empty_tuples = tbmres->ntuples;
-                       }
-                       else if (!table_scan_bitmap_next_block(scan, tbmres))
+                       if (!table_scan_bitmap_next_block(scan, tbmres))
                        {
                                /* AM doesn't think this block is valid, skip */
                                continue;
@@ -475,7 +448,6 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
                        while (node->prefetch_pages < node->prefetch_target)
                        {
                                TBMIterateResult *tbmpre = tbm_iterate(prefetch_iterator);
-                               bool            skip_fetch;
 
                                if (tbmpre == NULL)
                                {
@@ -486,25 +458,7 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
                                }
                                node->prefetch_pages++;
 
-                               /*
-                                * If we expect not to have to actually read this heap page,
-                                * skip this prefetch call, but continue to run the prefetch
-                                * logic normally.  (Would it be better not to increment
-                                * prefetch_pages?)
-                                *
-                                * This depends on the assumption that the index AM will
-                                * report the same recheck flag for this future heap page as
-                                * it did for the current heap page; which is not a certainty
-                                * but is true in many cases.
-                                */
-                               skip_fetch = (node->can_skip_fetch &&
-                                                         (node->tbmres ? !node->tbmres->recheck : false) &&
-                                                         VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-                                                                                        tbmpre->blockno,
-                                                                                        &node->pvmbuffer));
-
-                               if (!skip_fetch)
-                                       PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
+                               PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
                        }
                }
 
@@ -521,7 +475,6 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc scan)
                        {
                                TBMIterateResult *tbmpre;
                                bool            do_prefetch = false;
-                               bool            skip_fetch;
 
                                /*
                                 * Recheck under the mutex. If some other process has already
@@ -547,15 +500,7 @@ BitmapPrefetch(BitmapHeapScanState *node, TableScanDesc 
scan)
                                        break;
                                }
 
-                               /* As above, skip prefetch if we expect not to need page */
-                               skip_fetch = (node->can_skip_fetch &&
-                                                         (node->tbmres ? !node->tbmres->recheck : false) &&
-                                                         VM_ALL_VISIBLE(node->ss.ss_currentRelation,
-                                                                                        tbmpre->blockno,
-                                                                                        &node->pvmbuffer));
-
-                               if (!skip_fetch)
-                                       PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
+                               PrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);
                        }
                }
        }
@@ -749,8 +694,7 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)
         * stronger condition that there's no qual or return tlist at all.  But in
         * most cases it's probably not worth working harder than that.
         */
-       scanstate->can_skip_fetch = (node->scan.plan.qual == NIL &&
-                                                                node->scan.plan.targetlist == NIL);
+       scanstate->can_skip_fetch = false;
 
        /*
         * Miscellaneous initialization
-- 
2.45.2
