Hi hackers,

It seems the get_actual_variable_range function has a long history of fixes attempting to improve its worst-case behaviour, most recently in 9c6ad5eaa95, which limited the number of heap page fetches to 100. There's currently no limit on the number of index pages fetched.
We managed to get into trouble after deleting a large number of rows (~8 million) from the start of an index, which caused planning time to blow up on a hot (~250 calls/s) query. During the incident `explain (analyze, buffers)` looked like this:

 Planning:
   Buffers: shared hit=88311
 Planning Time: 249.902 ms
 Execution Time: 0.066 ms

The planner was burning a huge amount of CPU time looking through index pages for the first visible tuple. The problem eventually resolved when the affected index was vacuumed, but that took several hours to complete. There's a reproduction with a smaller dataset below.

Our current workaround for safely bulk deleting from these large tables involves delaying deletion of the minimum row until after a vacuum has run, so there's always a visible tuple near the start of the index. It's not realistic for us to run vacuums more frequently (i.e. after deleting a smaller number of rows) because they're so time-consuming.

The previous discussion [1] touched on the idea of also limiting the number of index page fetches, but there were doubts about the safety of back-patching and the ugliness of modifying the index AM API to support this. I would like to submit our experience as evidence that the lack of a limit on index page fetches is a real problem. Even if a fix for this doesn't get back-patched, it would be nice to see it in a major version. As a starting point, I've updated the WIP index page limit patch from Simon Riggs [2] to apply cleanly to master.
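For concreteness, the workaround is shaped roughly like this (a sketch only - the `events` table name and the id threshold are made up for illustration):

```sql
-- Delete everything except the current minimum row, so a visible
-- tuple remains near the start of the index for
-- get_actual_variable_range to find.
DELETE FROM events
 WHERE id <= 8000000
   AND id > (SELECT min(id) FROM events);

-- Later, once a vacuum of the table and its indexes has completed...
VACUUM events;

-- ...it's safe to remove the leftover minimum row.
DELETE FROM events
 WHERE id = (SELECT min(id) FROM events);
```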
Reproduction with a smaller dataset:

=# create table test (id bigint primary key) with (autovacuum_enabled = 'off');
=# insert into test select generate_series(1,10000000);
=# analyze test;

An explain with no dead tuples looks like this:

=# explain (analyze, buffers) select id from test where id in (select id from test order by id desc limit 1);
 Planning:
   Buffers: shared hit=8
 Planning Time: 0.244 ms
 Execution Time: 0.067 ms

But if we delete a large number of rows from the start of the index:

=# delete from test where id <= 4000000;

The performance doesn't become unreasonable immediately - it's limited to visiting 100 heap pages while it's setting killed bits on the index tuples:

=# explain (analyze, buffers) select id from test where id in (select id from test order by id desc limit 1);
 Planning:
   Buffers: shared hit=1 read=168 dirtied=163
 Planning Time: 5.910 ms
 Execution Time: 0.107 ms

But the number of index buffers visited increases on each query, and eventually all the killed bits are set:

$ for i in {1..500}; do psql test -c 'select id from test where id in (select id from test order by id desc limit 1)' >/dev/null; done

=# explain (analyze, buffers) select id from test where id in (select id from test order by id desc limit 1);
 Planning:
   Buffers: shared hit=11015
 Planning Time: 35.772 ms
 Execution Time: 0.070 ms

With the patch:

=# explain (analyze, buffers) select id from test where id in (select id from test order by id desc limit 1);
 Planning:
   Buffers: shared hit=107
 Planning Time: 0.377 ms
 Execution Time: 0.045 ms

Regards,
Rian

[1] https://www.postgresql.org/message-id/flat/CAKZiRmznOwi0oaV%3D4PHOCM4ygcH4MgSvt8%3D5cu_vNCfc8FSUug%40mail.gmail.com
[2] https://www.postgresql.org/message-id/CANbhV-GUAo5cOw6XiqBjsLVBQsg%2B%3DkpcCCWYjdTyWzLP28ZX-Q%40mail.gmail.com
diff --git a/src/backend/access/index/indexam.c b/src/backend/access/index/indexam.c
index dcd04b813d..8d97a5b0c1 100644
--- a/src/backend/access/index/indexam.c
+++ b/src/backend/access/index/indexam.c
@@ -333,6 +333,9 @@ index_beginscan_internal(Relation indexRelation,
 	scan->parallel_scan = pscan;
 	scan->xs_temp_snap = temp_snap;
 
+	scan->xs_page_limit = 0;
+	scan->xs_pages_visited = 0;
+
 	return scan;
 }
 
@@ -366,6 +369,9 @@ index_rescan(IndexScanDesc scan,
 	scan->kill_prior_tuple = false; /* for safety */
 	scan->xs_heap_continue = false;
 
+	scan->xs_page_limit = 0;
+	scan->xs_pages_visited = 0;
+
 	scan->indexRelation->rd_indam->amrescan(scan, keys, nkeys,
 											orderbys, norderbys);
 }
diff --git a/src/backend/access/nbtree/nbtsearch.c b/src/backend/access/nbtree/nbtsearch.c
index 57bcfc7e4c..4d8b8f1c83 100644
--- a/src/backend/access/nbtree/nbtsearch.c
+++ b/src/backend/access/nbtree/nbtsearch.c
@@ -2189,6 +2189,11 @@ _bt_readnextpage(IndexScanDesc scan, BlockNumber blkno, ScanDirection dir)
 			BTScanPosInvalidate(so->currPos);
 			return false;
 		}
+		if (unlikely(scan->xs_page_limit > 0) && ++scan->xs_pages_visited > scan->xs_page_limit)
+		{
+			BTScanPosInvalidate(so->currPos);
+			return false;
+		}
 		/* check for interrupts while we're not holding any buffer lock */
 		CHECK_FOR_INTERRUPTS();
 		/* step right one page */
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 5f5d7959d8..b68dfc77f4 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -6339,6 +6339,10 @@ get_actual_variable_endpoint(Relation heapRel,
 	index_scan->xs_want_itup = true;
 	index_rescan(index_scan, scankeys, 1, NULL, 0);
 
+	/* Don't index scan forever; correctness is not the issue here */
+#define VISITED_PAGES_LIMIT 100
+	index_scan->xs_page_limit = VISITED_PAGES_LIMIT;
+
 	/* Fetch first/next tuple in specified direction */
 	while ((tid = index_getnext_tid(index_scan, indexscandir)) != NULL)
 	{
@@ -6361,7 +6365,6 @@ get_actual_variable_endpoint(Relation heapRel,
 		 * since other recently-accessed pages are probably still in
 		 * buffers too; but it's good enough for this heuristic.
 		 */
-#define VISITED_PAGES_LIMIT 100
 		if (block != last_heap_block)
 		{
diff --git a/src/include/access/relscan.h b/src/include/access/relscan.h
index 521043304a..9dda821e0a 100644
--- a/src/include/access/relscan.h
+++ b/src/include/access/relscan.h
@@ -124,6 +124,9 @@ typedef struct IndexScanDescData
 	bool		xs_want_itup;	/* caller requests index tuples */
 	bool		xs_temp_snap;	/* unregister snapshot at scan end? */
 
+	uint32		xs_page_limit;	/* limit on num pages in scan, or 0=no limit */
+	uint32		xs_pages_visited;	/* current num pages visited in scan */
+
 	/* signaling to index AM about killing index tuples */
 	bool		kill_prior_tuple;	/* last-returned tuple is dead */
 	bool		ignore_killed_tuples;	/* do not return killed entries */