On Mon, Jun 9, 2025 at 10:47 AM Peter Geoghegan wrote:
> As I said, I think that this is actually an old bug. After all,
> _bt_killitems is required to test so->currPos.lsn against the same
> page's current LSN if the page pin was dropped since _bt_readpage was
> first called
On Mon, Jun 9, 2025 at 10:47 AM Peter Geoghegan wrote:
> I've always thought that the whole idiom of testing
> BTScanPosIsPinned() in a bunch of places was very brittle. I wonder if
> it makes sense to totally replace those calls/tests with similar tests
> of the new so->dropPin field.
This is also why we do things like allocate so->currTuples within
btrescan. We don't yet know if the scan will be an index-only scan
when btbeginscan is called.
--
Peter Geoghegan
I've always thought that the whole idiom of testing
BTScanPosIsPinned() in a bunch of places was very brittle. I wonder if
it makes sense to totally replace those calls/tests with similar tests
of the new so->dropPin field.
--
Peter Geoghegan
v1-0001-Fix-issue-with-mark-restore-nbtree-pins.patch
Description: Binary data
On Fri, Jun 6, 2025 at 6:58 PM Masahiko Sawada wrote:
> Agreed. Given the above test results, it's unlikely always sorting the
> array helps speedups.
Did you try specializing the sort? In my experience, it makes a big difference.
--
Peter Geoghegan
On Wed, Jun 4, 2025 at 1:39 PM Peter Geoghegan wrote:
> My current plan is to commit this in the next couple of days.
Pushed.
It would be great if we could also teach BufferGetLSNAtomic() to just
call PageGetLSN() (at least on PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY
platforms), but my sense is t
On Wed, Jun 4, 2025 at 10:21 AM Peter Geoghegan wrote:
> On Wed, Jun 4, 2025 at 7:33 AM Tomas Vondra wrote:
> > So better to get this in now, otherwise we may have to wait until PG19,
> > because of ABI (the patch adds a field into BTScanPosData, but maybe
> > it'd
. There's a kind of symmetry to how
things work with the patch in place, which IMV makes things simpler.
--
Peter Geoghegan
On Tue, Jun 3, 2025 at 1:59 PM Peter Geoghegan wrote:
> We do currently end up using OldestXmin-1 as our
> snapshotConflictHorizon when this happens, but as I said in the other
> email, I don't think that that's really required.
Actually, that's not really true, either.
--
Peter Geoghegan
lly planning (originally) to write a patch to try
> and change the horizon to make it older in more cases when it's
> correct. I'm trying to figure out the most straightforward code to
> calculate the combined snapshot conflict horizon for a prune/freeze/vm
> update record.
I figured that that was what this was really about.
--
Peter Geoghegan
the oldest safe value in all cases. How much that actually buys you
seems much less clear.
--
Peter Geoghegan
the same place, it might make
sense to do it just to make things more consistent.
--
Peter Geoghegan
in cases where we still give up and just use
OldestXmin (cases where we just use "OldestXmin - 1", I should say).
I'm not sure how much this matters in practice; I imagine that using
prunestate->visibility_cutoff_xid is very much the common case with
page-level freezing.
--
Peter Geoghegan
On Mon, Jun 2, 2025 at 10:27 AM Melanie Plageman
wrote:
> Attached what I plan to push shortly.
Looks good to me.
--
Peter Geoghegan
so that
it took place next to the initialization of other, similar BlockNumber
fields from LVRelState. IIRC I wrote a comment about this issue at
least in part because I understood the temptation to do that.
> I'll push the fix tomorrow.
Cool.
--
Peter Geoghegan
ning/unfrozen XID value from one of these unscanned heap
pages.
--
Peter Geoghegan
"break" here, it doesn't matter what visibility_cutoff_xid has
been set to. It cannot be used for any purpose.
--
Peter Geoghegan
On Fri, May 30, 2025 at 5:16 PM Melanie Plageman
wrote:
> On Thu, May 29, 2025 at 11:37 AM Peter Geoghegan wrote:
> > How can taking the "Avoids false conflicts when hot_standby_feedback
> > in use" path more often result in fewer unnecessary conflicts on
> > s
idTransactionId;
}
else
{
/* Avoids false conflicts when hot_standby_feedback in use */
snapshotConflictHorizon = vacrel->cutoffs.OldestXmin;
TransactionIdRetreat(snapshotConflictHorizon);
}
How can taking the "Avoids false conflicts when hot_standby_feedback
in use" path m
On Fri, May 23, 2025 at 11:42 AM Andres Freund wrote:
> > I'm envisioning this patch series as v19 work, were you
> > thinking we should be more aggressive?
>
> Mostly agreed - but I am wondering if the AV fix should be backpatched?
I think that it probably should be.
--
Peter Geoghegan
e.g., when there are one or more inequalities on the first column, plus a
"=" condition on the second column). I can live with this wording,
though.
Thanks
--
Peter Geoghegan
the release notes point
out that skip scan is only helpful when the leading/skipped column is
low cardinality (though that detail is accurate).
--
Peter Geoghegan
y needed to make kill_prior_tuple LP_DEAD
bit setting safe when the page pin is dropped. But it still seems
related to those GiST + SP-GiST IOS bugs, since we also need to
consider when dropping the pin is categorically unsafe (only nbtree
gets that aspect right currently).
--
Peter Geoghegan
t allocating a new multi
is/might be required.
--
Peter Geoghegan
Note on testing the behavior in this area: you may have to coax the
planner into picking an index scan to show this behavior -- it'll
generally prefer to use a Result node with "One-Time Filter: false"
instead. I like to do this by using a prepared statement, while
setting plan_cache_mode = 'force_generic_plan'.
--
Peter Geoghegan
On Mon, May 12, 2025 at 8:58 AM Peter Geoghegan wrote:
> I wonder if we can fix this problem by getting rid of the old support
> routine #5, "options". It currently doesn't do anything, and I always
> thought it was strange that it was added "for uniformity"
On Tue, May 20, 2025 at 8:43 AM Peter Geoghegan wrote:
> I imagine bitmap index scans will be similar to plain index scans.
To be clear, I meant that the magnitude of the problem will be
similar. I do not mean that they aren't fixed by my v2. They should be
fully fixed by v2.
-
t; for when called.
In other words, plain index scans that read a lot of tuples might
receive no benefit whatsoever. It's possible that it already matters
less there anyway. It's also possible that there is some unforeseen
benefit from merely *delaying* the call to BufferGetLSNAtomic. B
ressions tied to the introduction of a new
skip support support function to nbtree). Fully makes sense that my v2
would fix that particular problem, even with plain index scans.
But...what about slightly different queries, that actually do return
some rows? Those will look different.
--
Peter Geoghegan
avoid calling
BufferGetLSNAtomic() unless and until its return value might actually
be useful. It's quite a lot easier to understand the true purpose of
so->currPos.lsn with this new structure.
--
Peter Geoghegan
On Mon, May 19, 2025 at 2:19 PM Peter Geoghegan wrote:
> On Mon, May 19, 2025 at 2:01 PM Tomas Vondra wrote:
> > The regular index scan however still have this issue, although it's not
> > as visible as for IOS.
>
> We can do somewhat better with plain index scans than
returns "true" actually takes place (or
until the whole scan ends).
--
Peter Geoghegan
On Mon, May 19, 2025 at 12:42 PM Peter Geoghegan wrote:
> We don't actually need to call BufferGetLSNAtomic() from _bt_readpage
> during index-only scans (nor during bitmap index scans). We can just
> not call BufferGetLSNAtomic() at all (except during plain index
> scans),
), and so obviously have no possible
use for so->currPos.lsn, either).
--
Peter Geoghegan
000 and k = 1;
QUERY PLAN
Index Only Scan using t_i_j_k_idx on t (actual rows=1 loops=1)
Index Cond: ((i >= 1) AND (i <= 50) AND (k = 1))
Heap Fetches: 0
Buffers: shared hit=1919
Planning:
Buffers: shared hit=4
Planning Tim
until somebody
needs to add another support routine to nbtree; why delay dealing with
the problem that you've highlighted? Right now I don't really have an
opinion on how best to address the problem.
--
Peter Geoghegan
t alignment padding space).
AFAICT nothing that seems like it might be relevant changed, apart
from the addition of the new support routine, which was ruled out
already.
--
Peter Geoghegan
On Fri, May 9, 2025 at 12:29 PM Tomas Vondra wrote:
> Tried, doesn't seem to affect the results at all.
OK, then. I don't think that we're going to figure it out this side of
pgConf.dev. I'm already well behind on talk preparation.
--
Peter Geoghegan
mous --
that's *very* surprising to me. btbeginscan and btrescan go from being
microscopic to being very prominent. But skip scan simply didn't touch
either function, at all, directly or indirectly. And neither function
has really changed in any significant way in recent years. So right
now I'm completely stumped.
--
Peter Geoghegan
r
overhead) in the case of the "bid = 1" variant query. There's no
reason to expect the absolute number of cycles added by some
hypothetical regression in preprocessing to vary among these two
variants of your count(*) query.
--
Peter Geoghegan
85 leaf pages only applies when
partitioning isn't in use. When it is in use, each individual
partition's index has only one index leaf page. So each individual
index scan is indeed fairly inexpensive, particularly relative to
startup cost/preprocessing cost.
--
Peter Geoghegan
, and 2.) that it involves the use of partitioning.
I made sure to test pgbench SELECT quite thoroughly -- that certainly
wasn't regressed. So it seems very likely to have something to do with
partitioning.
--
Peter Geoghegan
On Fri, May 9, 2025 at 11:55 AM Peter Geoghegan wrote:
> Note that "sizeof(FmgrInfo)" is 48 bytes. Prior to skip scan,
> RelationData.rd_supportinfo would have required 48*5=240 bytes. After
> skip scan, it would have required 48*6=288 bytes. Maybe 256 bytes is
> some kind of critical threshold, someplace?
--
Peter Geoghegan
On Fri, May 9, 2025 at 9:59 AM Peter Geoghegan wrote:
> I don't actually think that this kind of scan would have been affected
> by those known regressions -- since they don't use array keys. But it
> is definitely true that the queries that you're looking at very much
> "buffer" in glibc, to reduce the amount of malloc
> syscalls. And with that, the results change to this:
You're sure that the problem is an increase in the number of
malloc()s? If that's what this is, then it shouldn't be too hard to
debug.
--
Peter Geoghegan
On Fri, May 9, 2025 at 9:33 AM Peter Geoghegan wrote:
> Can you try it again, with prepared statements? Alternatively, you
> could selectively revert the changes that commit 92fe23d93aa made to
> utils/adt/selfuncs.c, and then retest.
Oh, wait, you already did try it with prepared s
On Fri, May 9, 2025 at 9:42 AM Peter Geoghegan wrote:
> I'm rather puzzled as to why this happens, then. I expect that nbtree
> preprocessing will be able to use its usual single index column/index
> key fast path here -- the "We can short-circuit most of the work if
> ther
ut perhaps I overlooked some
subtlety.
Can you try it again, with prepared statements? Alternatively, you
could selectively revert the changes that commit 92fe23d93aa made to
utils/adt/selfuncs.c, and then retest.
--
Peter Geoghegan
; previously.
Cool.
Pushed both patches from v2 just now.
Thanks
--
Peter Geoghegan
On Fri, May 2, 2025 at 3:04 PM Peter Geoghegan wrote:
> I would like to commit the first patch later today, ahead of shipping
> beta1. But the second patch won't make it into beta1.
Committed the first patch last Friday.
Attached is v2, whose 0002- bugfix patch is essentially unchang
different feature. Many hackers have
made the same mistake in the past. ISTM that specifically describing
how the feature applies to queries that omit an "=" condition makes
this misunderstanding less likely to occur.
--
Peter Geoghegan
On Fri, May 2, 2025 at 2:22 PM Peter Geoghegan wrote:
> A slight variant of my fuzzing Python script did in fact go on to
> detect a couple of bugs.
>
> I'm attaching a compressed SQL file with repros for 2 different bugs.
> The first bug was independently detected by
account for the case where we
completely end the scan during preprocessing due to it having an
unsatisfiable array qual.
Pushed a fix for this just now.
Thanks
--
Peter Geoghegan
On Tue, Apr 1, 2025 at 7:02 PM Peter Geoghegan wrote:
> Though I think it should be "" safe even when "key->sk_attno >
> firstchangingattnum" "", to highlight that the rule here is
> significantly more permissive than even the nearby range ski
affected by this the most are index-only scans on
> all-visible tables that fit into shared buffers, with
> correlated/sequential pattern. Or even regular index scans with all data
> in shred buffers.
My hope is that the index scan manager can be taught to back off when
this is happening, to avoid the regressions. Or that it can avoid them
by only gradually ramping up the prefetching. Does that sound
plausible to you?
--
Peter Geoghegan
results, correct results).
Cool!
> Compared to the last patch version [1] shared on list (in November),
> there's a number of significant design changes - a lot of this is based
> on a number of off-list discussions I had with Peter Geoghegan, which
> was very helpful.
Thanks for
On Fri, Apr 11, 2025 at 4:26 PM Jonathan S. Katz wrote:
> Please join us in wishing Jacob much success and few reverts!
Well done, Jacob.
--
Peter Geoghegan
On Tue, Apr 8, 2025 at 11:20 AM Nathan Bossart wrote:
> +1 for UTC.
+1, I think that AoE is needlessly obscure.
--
Peter Geoghegan
e wrong idea
about the index. I think that it's much less important to help users
to understand exactly how well a skip scan performs, relative to some
theoretical ideal. The theoretical ideal is just too complicated.
> I'm not sure that I correctly understood about "separation optimizer index
> path/executor stage". Do you mean that it's better to do all the
> optimizations during index execution rather than during planning?
Yes. In general it's good if we can delay decisions about the scan's
behavior until runtime, when we have a more complete picture of what
the index actually looks like.
--
Peter Geoghegan
nless a mark is
> done.
That's exactly what I had in mind.
--
Peter Geoghegan
> MaxTIDsPerBTreePage * sizeof(BTScanPosItem) array once per scan matters when
> that many tuples are returned.
I'm not arguing for or against anything. I'm just giving context.
--
Peter Geoghegan
ed to leak memory. Same with
all operators. This will be far from the only time that we expect
opclass authors to get that right. This mostly works just fine,
probably because an opclass that leaked memory like this would visibly
break quite quickly.
> You could Assert(inc_sk_argument == (Datum) 0) in the oflow branch?
> I'm certain that (Datum) 0 is an invalid representation of a pointer,
> so we know that no allocated value is returned (be it new or
> pre-existing).
I just don't see what the point would be. Nothing would stop a
decrement/increment callback that needs to allocate memory from
returning 0 and then leaking memory anyway.
--
Peter Geoghegan
TIDs
from a page, though. Any low cardinality index will tend to have
almost that many TIDs to return on any page that only stores
duplicates. And the scan will necessarily have to return all of the TIDs
from such a page, if it has to return any.
--
Peter Geoghegan
On Fri, Apr 4, 2025 at 12:00 PM Tom Lane wrote:
> Peter Geoghegan writes:
> > Is somebody going to commit this soon? Alexander?
>
> Done.
Thanks!
> If it's still not stable I think the next step is to nuke both
> test queries, since I remain of the opinion that t
On Fri, Apr 4, 2025 at 11:19 AM Alena Rybakina
wrote:
> I fixed it - changed the tables and didn't use system tables.
Is somebody going to commit this soon? Alexander?
--
Peter Geoghegan
On Wed, Apr 2, 2025 at 12:43 PM Álvaro Herrera wrote:
> I'm just saying that this is the reason to have the field be last in the
> struct.
Right. And I'm just saying that I don't think that it would take very
much effort to change it. That aspect isn't performance critical.
--
Peter Geoghegan
the common case where there weren't too many tuples to
return from the page.
The way that we allocate BLCKSZ twice for index-only scans (one for
so->currTuples, the other for so->markTuples) is also pretty
inefficient. Especially because any kind of use of mark and restore is
exceedingly rare.
--
Peter Geoghegan
In
practice, merge joins tend to hardly ever do that (I know this because
it makes testing annoyingly difficult). In other words, the
optimization added by commit 08ae5edc5c seems to be very effective in
practice.
--
Peter Geoghegan
within
_bt_set_startikey itself this way, which doesn't add up to much.
_bt_set_startikey is only called once per page.
Besides, in general it's fairly likely that a range skip array that
_bt_set_startikey sees won't be against a column that we already know
(from the _bt_keep_natts_fa
ly address the opclass
author's responsibilities around NULLs? It specifically says that it's
not up to the opclass author to think about them at all.
> The current system is quite easy to build a leaking
> implementation for. Sure, there are other easy ways to break this, but
> I think it's an easy API modification that makes things just that bit
> safer.
How can I do that? The callbacks themselves (the callbacks for
functions that are called as the scan progresses) don't use the fmgr
interface.
They're simple C function pointers (rather like sort support
callbacks), and do not have any args related to NULLs. They accept a
raw Datum, which can never be a NULL. The convention followed by
similar functions that are passed a Datum that might just be a NULL is
for the function to also accept a separate "bool isnull" argument.
(Just not having such a separate bool arg is another existing
convention, and the one that I've followed here.)
> A renewed review for 0002+ will arrive at a later point.
Thanks for the review!
--
Peter Geoghegan
nce I uploaded a complete HTML code coverage report
> (~36 Mb) [1].
Thanks. I actually generated a similar coverage report myself
yesterday. I'm keeping an eye on this, but I don't think that it's
worth aiming for very high test coverage for these code paths. Writing
a failing test case for that one ISNULL _bt_advance_array_keys code
path was quite difficult, and required quite a big index. That isn't
going to be acceptable within the standard regression tests.
--
Peter Geoghegan
of the big patch file, though. Maybe you're just not used to seeing
tests appear last like this?
I use "git config diff.orderfile ... " to get this behavior. I find it
useful to put the important changes (particularly header file changes)
first, and less important changes (like tests) much later.
Thanks for taking a look at my patch!
--
Peter Geoghegan
f a missed opportunity. It doesn't guarantee
that the scan can end in the case of a SAOP (the very next leaf page
could satisfy the same scan key, given a SAOP array with "gaps" in the
elements). So it can only really end the scan with a scalar = key --
though never when it is preceded by a skip array (doesn't matter if
the skip array is definitely satisfied/has only one distinct attribute
value on the page). Is this idea related to your previous idea
involving "special-casing firstchangingattnum=1"?
If I was going to do something like this, I think that it'd work by
backing out of applying the optimization entirely. Right now, 0003-*
applies the optimization whenever _bt_readpage decides to call
_bt_skip_ikeyprefix, regardless of the details after that (it would be
easy to teach _bt_skip_ikeyprefix to decide against applying its
optimization entirely, but testing seems to show that it always makes
sense to go ahead when _bt_skip_ikeyprefix is called, regardless of
what _bt_skip_ikeyprefix sees on the page).
> Thank you for working on this.
Thanks for the review!
--
Peter Geoghegan
than it's worth.
The duplication isn't quite duplication, so much as it's two functions
that are mirror images of each other. The details/direction of things
is flipped in a number of places. The strategy number differs, even
though the same function is called.
--
Peter Geoghegan
the patch, or to nag
the patch author).
It probably makes sense to make stale/in limbo entries stick out like
a sore thumb. They're *supposed* to be annoying.
--
Peter Geoghegan
e that shows certain "Personal Dashboard" entries
are "in limbo". But please don't make entries in this state just
vanish, on the grounds that they're theoretically closed -- experience
suggests that they're probably not really closed at all.
If something appears on my "Personal Dashboard" it appears there
because I actively chose to participate, and hiding things from me on
bureaucratic grounds seems paternalistic.
--
Peter Geoghegan
On Wed, Mar 19, 2025 at 5:08 PM Peter Geoghegan wrote:
> The tricky logic added by 0003-* is my single biggest outstanding
> concern about the patch series. Particularly its potential to confuse
> the existing invariants for required arrays. And particularly in light
> of the changes
ling code (in
execParallel.c) less consistent. I really don't think that that
alternative is any better to what I actually did (I actually
considered doing things this way myself at one point), though it also
doesn't seem worse. So I'm inclined to do nothing about it now.
--
Peter Geoghegan
comment next to the new assertions recently added to
ExecIndexScanReInitializeDSM and ExecIndexOnlyScanReInitializeDSM be
an improvement? And if so, what would the comment say?
--
Peter Geoghegan
's commit), or in ExecEndBitmapHeapScan (added by the similar
bitmap heap instrumentation patch discussed on that other thread,
which became commit 5a1e6df3).
--
Peter Geoghegan
On Wed, Mar 5, 2025 at 9:37 AM Peter Geoghegan wrote:
> Committed just now. Thanks again.
I had to revert this for now, due to issues with debug_parallel_query.
Apologies for the inconvenience.
The immediate problem is that when the parallel leader doesn't
participate, there is
On Sun, Mar 9, 2025 at 6:23 PM Tom Lane wrote:
> Ah. Most likely somebody dismissed it years ago. Given that
> precedent, I'm content to dismiss this one too.
It is dead code, unless somebody decides to #define
DISABLE_LEADER_PARTICIPATION to debug a problem.
--
Peter Geoghegan
On Fri, Mar 7, 2025 at 12:18 PM Peter Geoghegan wrote:
> What do you think of the attached WIP patch, which does things this
> way? Does this seem like the right general direction to you?
Attached is a more refined version of this patch, which is
substantially the same as the ver
I try this out with a debugger, even the B-Tree scan
doesn't even have IndexScanDescData.parallel_scan set. It
isn't actually a parallel B-Tree scan. It is a
serial/non-parallel-aware index scan that is run from a parallel
worker, and feeds its output into a gather merge node despite all
this.
--
Peter Geoghegan
s the same approach already used
for things like sort nodes and hash join nodes.
Obviously, this revised version of the patch passes all tests when the
tests are run with debug_parallel_query=regress.
--
Peter Geoghegan
v1oftake2-0001-Show-index-search-count-in-EXPLAIN-ANALYZE.patch
Description: Binary data
-- that detail is almost inevitable. The main
problem is that it failed to teach
nodeIndexscan.c/nodeIndexonlyscan.c/nodeBitmapIndexscan.c to place the
IndexScanDescData.nsearches counter somewhere where explain.c could
later get to reliably. That'd probably be easier if
IndexScanDescDa
On Wed, Mar 5, 2025 at 1:34 PM Andres Freund wrote:
> Pushed both patches.
Thanks!
--
Peter Geoghegan
em?
That would certainly be nice.
--
Peter Geoghegan
Committed just now. Thanks again.
On Mon, Mar 3, 2025 at 4:01 PM Robert Haas wrote:
> On Thu, Feb 27, 2025 at 7:58 PM Peter Geoghegan wrote:
> > It's easy to produce an example that makes intuitive sense. For
> > example, with skip scan that has a qual such as "WHERE a
These suggestions are made based on some fairly optimistic
assumptions about how hard this would be to put into action. These are
just nice-to-haves for me, so if it was a great deal of work then it
seems hard to justify.
--
Peter Geoghegan
Or can I do it?
--
Peter Geoghegan
the state
that tracks the number of index searches (IndexScanDesc, BTScanOpaque,
or a global counter)."
This single line summary was spot on. The summary of my nbtree skip
scan patch series was also quite good.
I'm not sure if I'll ever actually use this tool day to day, or even
week to week. But I just might. In any case I'm glad that somebody is
experimenting with it.
--
Peter Geoghegan
is just prominent enough to prevent that, but
not so prominent as to cause annoyance (that's the idea, at least).
If I later decide that I actually do want to forget about a patch
forever, then it should be equally easy to unlike/unfollow a patch.
ISTM that there is value in a workflow that makes that into an active
choice.
--
Peter Geoghegan
On Tue, Mar 4, 2025 at 12:02 PM Jelte Fennema-Nio wrote:
>
> On Tue, 4 Mar 2025 at 17:31, Peter Geoghegan wrote:
> > > Peter Geoghegan suggested adding a "dashboard" of this
> > > kind. Feedback on this is very welcome, but depending on the
> > > comple
y. If you're not logged in it will show you the current
> commitfest. See screenshot for an example. The old list of all
> commitfests is moved to the /archive (which has a button on the
> homepage).
This looks very much like what I had in mind. Thanks!
> Peter Geoghegan sugge
I'd ask before proceeding
with review and commit.
--
Peter Geoghegan
On Thu, Feb 27, 2025 at 7:57 PM Peter Geoghegan wrote:
> On Thu, Feb 27, 2025 at 3:42 PM Robert Haas wrote:
> > Good documentation could help.
Attached revision adds an example that shows how "Index Searches: N"
can vary. This appears in "14.1.2. EXPLAIN ANALYZE"
> an Index Only Scan, even when the index key and sort key are the same.
> For example:
>
> --- src/test/modules/xtree/expected/interval.out
> +++ src/test/modules/xtree/results/interval.out
What's the xtree module? There's no such test module?
--
Peter Geoghegan