On 2024-11-21 04:40, Peter Geoghegan wrote:
> On Wed, Nov 20, 2024 at 4:04 AM Masahiro Ikeda
> <ikeda...@oss.nttdata.com> wrote:
>> Thanks for your quick response!
>
> Attached is v16. This is similar to v15, but the new
> v16-0003-Fix-regressions* patch to fix the regressions is much less
> buggy, and easier to understand.
>
> Unlike v15, the experimental patch in v16 doesn't change anything
> about which index pages are read by the scan -- not even in corner
> cases. It is 100% limited to fixing the CPU overhead of maintaining
> skip arrays uselessly *within* a leaf page. My extensive test suite
> passes; it no longer shows any changes in "Buffers: N" for any of the
> EXPLAIN (ANALYZE, BUFFERS) ... output that the tests look at. This is
> what I'd expect.
>
> I think that it will make sense to commit this patch as a separate
> commit, immediately after skip scan itself is committed. It makes it
> clear that, at least in theory, the new v16-0003-Fix-regressions*
> patch doesn't change any behavior that's visible to code outside of
> _bt_readpage/_bt_checkkeys/_bt_advance_array_keys.

Thanks for the update! I'll look into the details and make sure I
understand the approach of committing the fix separately.
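
For reference, here is a minimal sketch of the kind of check I have in
mind; the schema is only my assumption, modeled on the t/t_idx names
from the earlier test case:

-- hypothetical schema, for illustration only
CREATE TABLE t (id1 int, id2 int);
INSERT INTO t SELECT i, i % 100 FROM generate_series(1, 1000000) i;
CREATE INDEX t_idx ON t (id1, id2);
VACUUM ANALYZE t;

-- if the patch only changes per-page CPU behavior, the "Buffers: N"
-- counts should be identical with and without v16-0003
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE id2 = 1;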

>> I hadn't come up with that idea myself. At first glance, your idea
>> seems good for all cases.
>
> My approach of conditioning the new "beyondskip" behavior on
> "has_skip_array && beyond_end_advance" is at least a good start.
>
> The idea behind conditioning this behavior on having at least one
> beyond_end_advance array advancement is pretty simple: in practice
> that almost never happens during skip scans that actually end up
> skipping (either via another _bt_first that redescends the index, or
> via skipping "within the page" using the
> _bt_checkkeys_look_ahead/pstate->skip mechanism). So that definitely
> seems like a good general heuristic. It just isn't sufficient on its
> own, as you have shown.

Yes, I think so. This idea can make the worst-case execution time of a
skip scan almost the same as that of a full index scan.
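
To check my understanding with a concrete (assumed) example, reusing
the t/t_idx sketch above: whether skipping actually pays off mostly
depends on the number of distinct values in the leading column.

-- few distinct id1 values: the scan skips via another _bt_first or
-- the look-ahead mechanism, and beyond_end_advance rarely fires
TRUNCATE t;
INSERT INTO t SELECT i % 10, i FROM generate_series(1, 1000000) i;
VACUUM ANALYZE t;
EXPLAIN ANALYZE SELECT * FROM t WHERE id2 = 1;

-- nearly unique id1 values: the scan degenerates toward a full index
-- scan, which is the case the heuristic needs to catch
TRUNCATE t;
INSERT INTO t SELECT i, i FROM generate_series(1, 1000000) i;
VACUUM ANALYZE t;
EXPLAIN ANALYZE SELECT * FROM t WHERE id2 = 1;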

>> Actually, test.sql shows a performance improvement, and the
>> performance is almost the same as the master's seqscan. To be
>> precise, master is 10-20% faster than the v15 patch because its
>> seqscan is executed in parallel. However, the v15 patch is twice as
>> fast as the seqscan when the seqscan is not executed in parallel.
>
> I think that that's a good result, overall.
>
> Bear in mind that a case such as this might receive a big performance
> benefit if it can skip only once or twice. It's almost impossible to
> model those kinds of effects within the optimizer's cost model, but
> they're still important effects.
>
> FWIW, I notice that your "t" test table is 35 MB, whereas its t_idx
> index is 21 MB. That's not very realistic (the index size is usually a
> smaller fraction of the table size than we see here), which probably
> partly explains why the planner likes parallel sequential scan for
> this.

Yes, I agree that the above case is not realistic. If anything, the
main purpose of this case is simply to surface regression scenarios.
One possible real use case I can think of is enforcing a unique
constraint across all columns, but such cases are probably very rare.
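
For what it's worth, a sketch of that shape (the table name is
hypothetical; the index name is the default one PostgreSQL generates
for the constraint):

-- a unique constraint across all columns forces an index that covers
-- every column, so it ends up nearly as large as the heap
CREATE TABLE t_uniq (id1 int, id2 int, UNIQUE (id1, id2));
INSERT INTO t_uniq SELECT i, i FROM generate_series(1, 1000000) i;

SELECT pg_size_pretty(pg_relation_size('t_uniq')) AS heap_size,
       pg_size_pretty(pg_relation_size('t_uniq_id1_id2_key')) AS index_size;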

> I have an experimental fix in mind for this case. One not-very-good
> way to fix this new problem seems to work:
>
> diff --git a/src/backend/access/nbtree/nbtutils.c b/src/backend/access/nbtree/nbtutils.c
> index b70b58e0c..ddae5f2a1 100644
> --- a/src/backend/access/nbtree/nbtutils.c
> +++ b/src/backend/access/nbtree/nbtutils.c
> @@ -3640,7 +3640,7 @@ _bt_advance_array_keys(IndexScanDesc scan, BTReadPageState *pstate,
>       * for skip scan, and stop maintaining the scan's skip arrays until we
>       * reach the page's finaltup, if any.
>       */
> -    if (has_skip_array && beyond_end_advance &&
> +    if (has_skip_array && !all_required_satisfied &&
>          !has_required_opposite_direction_skip && pstate->finaltup)
>          pstate->beyondskip = true;
>
> However, a small number of my test cases now fail. And (I assume) this
> approach has certain downsides on leaf pages where we're now too quick
> to stop maintaining skip arrays.

I built with the above change and ran make check, and found an
assertion failure, though it may not be related to what you pointed
out.

* The simple query that reproduces it (you can see the original query
in opr_sanity.sql):

select * from pg_proc
where proname in (
    'lo_lseek64',
    'lo_truncate',
    'lo_truncate64')
and pronamespace = 11;

* The assertion failure:

TRAP: failed Assert("sktrig_required && required"), File: "nbtutils.c", Line: 3375, PID: 362411

While investigating the error, I think we may need to consider the
case where key->sk_flags does not have SK_BT_SKIP set. The assertion
fails because requiredSameDir never becomes true: the proname key is
not a skip array, and with beyondskip set, the branch below that would
set requiredSameDir is never reached.

+        if (beyondskip)
+        {
+            /*
+             * "Beyond end advancement" skip scan optimization.
+             *
+             * Just skip over any skip array scan keys. Treat all other scan
+             * keys as not required for the scan to continue.
+             */
+            Assert(!prechecked);
+
+            if (key->sk_flags & SK_BT_SKIP)
+                continue;
+        }
+        else if (((key->sk_flags & SK_BT_REQFWD) && ScanDirectionIsForward(dir)) ||
+                 ((key->sk_flags & SK_BT_REQBKWD) && ScanDirectionIsBackward(dir)))
             requiredSameDir = true;

> What I really need to do next is to provide a vigorous argument for
> why the new pstate->beyondskip behavior is correct. I'm already
> imposing restrictions on range skip arrays in v16 of the patch --
> that's what the "!has_required_opposite_direction_skip" portion of the
> test is about. But it still feels too ad-hoc.
>
> I'm a little worried that these restrictions on range skip arrays will
> themselves be the problem for some other kind of query. Imagine a
> query like this:
>
> SELECT * FROM t WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1
>
> This is probably going to be regressed due to the aforementioned
> "!has_required_opposite_direction_skip" restriction. Right now I don't
> fully understand what restrictions are truly necessary, though. More
> research is needed.
>
> I think for v17 I'll properly fix all of the regressions that you've
> complained about so far, including the most recent "SELECT * FROM t
> WHERE id2 = 1_000_000" regression. Hopefully the best fix for this
> other "WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1" regression will
> become clearer once I get that far. What do you think?

To be honest, I don't fully understand
has_required_opposite_direction_skip and its context at the moment.
Please give me some time to understand it, and I'd like to provide
feedback afterward.
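
When I do test it, I'll probably start from something like this sketch
(assuming the same t/t_idx schema as before; underscored numeric
literals require v16 or later):

-- per the above, the range skip array on id1 has a required bound in
-- the opposite scan direction, so the
-- "!has_required_opposite_direction_skip" part of the test would
-- prevent pstate->beyondskip from being set here
EXPLAIN ANALYZE
SELECT * FROM t WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1;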

FWIW, the optimization is especially important for types that don't
support 'skipsupport', like 'real'. Although the example case I
provided uses integer types, such columns (like 'real' ones) tend to
have many distinct values and high cardinality, which means the chance
of skip scan working efficiently can be low.
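
As an assumed illustration (not a benchmark), something like:

-- 'real' has no skip support, so nbtree must probe for the next
-- distinct value rather than incrementing it directly
CREATE TABLE t_real (r real, id2 int);
INSERT INTO t_real SELECT random()::real, i % 100
FROM generate_series(1, 1000000) i;
CREATE INDEX t_real_idx ON t_real (r, id2);
VACUUM ANALYZE t_real;

-- with roughly a million distinct r values, skipping rarely helps, so
-- avoiding useless skip-array maintenance matters most here
EXPLAIN ANALYZE SELECT * FROM t_real WHERE id2 = 1;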

>> There may be a better way, such as the new idea you suggested, and I
>> think there is room for discussion about how far we should go in
>> handling regressions: whether we accept some regressions, or
>> sacrifice some of the benefits of skip scan to address them.
>
> There are definitely lots more options to address these regressions.
> For example, we could have the planner hint that it thinks that skip
> scan won't be a good idea, without that actually changing the basic
> choices that nbtree makes about which pages it needs to scan (only how
> to scan each individual leaf page). Or, we could remember that the
> previous page used "pstate->beyondskip" each time _bt_readpage reads
> another page. I could probably think of 2 or 3 more ideas like that,
> if I had to.
>
> However, the problem is not a lack of ideas IMV. The important
> trade-off is likely to be between how effectively we can avoid these
> regressions and how much complexity each approach imposes. My guess
> is that complexity is more likely to impose limits on us than overall
> feasibility.

OK, I think you're right.
Regards,
--
Masahiro Ikeda
NTT DATA CORPORATION