Hubert Depesz Lubaczewski
"Paul Jungwirth" vs "Paul A. Jungwirth"
Sidenote: Paul Amondson appears as "Amonson, Paul D" on this list, and
is not a duplicate entry for Paul A. Jungwirth, as might be suspected
based on initials.
Kind regards,
Matthias van de Meent
ple_data_split, however, uses the regclass to decode the contents of
the tuple, and can thus determine with certainty, based on that
regclass, that it was supplied incorrect arguments (a non-heapAM
table's regclass). It therefore has enough context to bail out and stop trying
to decode the page's tuple data.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
p to) 19% of our
current allocation. I'm not sure whether these tricks would improve
performance or might even hurt it, apart from smaller structs usually
fitting better in CPU caches.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
[^1] NLOCKENTS() benefits from be
emp blocks were already tracked
(and displayed with the BUFFERS option) when track_io_timing was
enabled: temp timing was introduced with efb0ef90 in early April 2022,
and the output of IO timings for shared blocks has existed since the
introduction of track_io_timing in 40b9b957 back in late March of
2012.
Kind regards,
Matthias van de Meent
t work on sequences think it's better
to also support inspection of sequences, then I think that's a good
reason to add that support where it doesn't already exist.
As for patch v3, that seems fine with me.
Matthias van de Meent
Neon (https://neon.tech)
ss to move the state forward. I found this patch
helpful while working on solving this issue, even if it wouldn't have
found the bug as reported.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
v1-0001-Fix-stuck-parallel-btree-scans.patch
Description: Binary data
v1-0002-nbtree-add-tracking-of-processing-responsibilitie.patch
Description: Binary data
On Tue, 2 Jul 2024 at 02:23, David Rowley wrote:
>
> On Mon, 1 Jul 2024 at 23:42, Matthias van de Meent
> wrote:
> >
> > On Mon, 1 Jul 2024 at 12:49, David Rowley wrote:
> > >
> > > On Mon, 1 Jul 2024 at 22:07, Matthias van de Meent
> > > wrote
On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov wrote:
>
> On 5/8/24 17:13, Matthias van de Meent wrote:
> > As you may know, aggregates like SELECT MIN(unique1) FROM tenk1; are
> > rewritten as SELECT unique1 FROM tenk1 ORDER BY unique1 USING < LIMIT
> > 1; by using t
On Wed, 17 Jul 2024 at 16:09, Andrei Lepikhov wrote:
>
> On 17/7/2024 16:33, Matthias van de Meent wrote:
> > On Wed, 17 Jul 2024 at 05:29, Andrei Lepikhov wrote:
> >> Thanks for the job! I guess you could be more brave and push down also
> >> FILTER stateme
On Fri, 22 Mar 2024, 01:29 Michał Kłeczek, wrote:
> On 21 Mar 2024, at 23:42, Matthias van de Meent
> wrote:
>> On Tue, 19 Mar 2024 at 17:00, Michał Kłeczek wrote:
>>> With this operator we can write our queries like:
>>>
>>> account_number ||= [list of
d, so updating this all to be
consistently BlockNumber across the API seems like a straightforward
patch.
cc-ed Thomas as committer of the PG17 smgr API changes.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Tue, 30 Jul 2024 at 14:32, Thomas Munro wrote:
>
> On Tue, Jul 30, 2024 at 11:24 PM Matthias van de Meent
> wrote:
> > While working on rebasing the patches of Neon's fork onto the
> > REL_17_STABLE branch, I noticed that the nblocks arguments of various
> >
Hi,
On Thu, 1 Aug 2024 at 18:44, Andres Freund wrote:
> On 2024-08-01 12:45:16 +0200, Matthias van de Meent wrote:
> > Here's one that covers both master and the v17 backbranch.
>
> FWIW, I find it quite ugly to use BlockNumber to indicate the number of blocks
> to be wri
ible
for providing the initial sorted runs does resonate with me, and can
also be worth pursuing.
I think it would indeed save time otherwise spent checking whether
tuples can be merged before they're first spilled to disk, when we
already know which tuples form a sorted run. Afterwards, only the
phases that merge sorted runs from disk would require my buffered
write approach that merges Gin tuples.
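To illustrate why pre-sorted runs are cheaper to handle: already-sorted runs need comparisons only at merge time, with no up-front sorting or merge-eligibility pass. A generic sketch (illustrative Python, not the actual tuplesort code; `merge_sorted_runs` is a hypothetical helper):

```python
import heapq

def merge_sorted_runs(runs):
    # Each input run is already sorted, so no per-run sorting (and no
    # per-element "can these be merged?" pre-pass) is needed up front;
    # heapq.merge compares elements lazily while merging.
    return list(heapq.merge(*runs))

print(merge_sorted_runs([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))
# -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```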
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Fri, 1 Mar 2024 at 14:48, Matthias van de Meent
wrote:
> Attached is version 15 of this patch, with the above issues fixed.
> It's also rebased on top of 655dc310 of this morning, so that should
> keep good for some time again.
Attached is version 16 now. Relevant changes from
ackward scan code in _bt_readpage to have an
approximately equivalent handling to that of the forward scan case for
end-of-scan cases, which is an improvement IMO.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech/)
rinsert can probably be improved significantly if
we use things like atomic operations on STIR pages. We'd need an
exclusive lock only for page initialization, while share locks are
enough if the page's data is modified without WAL. That should improve
concurrent insert performance signifi
0/";
directive in the new thread's initial mail's text. This would give the
benefit of requiring no second mail for CF referencing purposes, be it
automated or manual.
Alternatively, we could allow threads for new entries to be started through
the CF app (which would automatically inser
scans' metric that differs from the loop count), so
maybe this would better be implemented in that framework?
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
[0]
https://www.postgresql.org/message-id/flat/TYWPR01MB10982D24AFA7CDC273445BFF0B1DC2%40TYWPR01MB10982.jpnprd01.prod.outlook.com#9c64cf75179da8d657a5eab7c75be480
On Thu, 15 Aug 2024 at 23:10, Peter Geoghegan wrote:
>
> On Thu, Aug 15, 2024 at 4:34 PM Matthias van de Meent
> wrote:
> > > Attached patch has EXPLAIN ANALYZE display the total number of
> > > primitive index scans for all 3 kinds of index scan node. This is
> &
iling list
archives [0] seem to fail with a 503 produced by a Varnish Cache
server: "Error 503 Backend fetch failed". Maybe more of the
infrastructure is down, or otherwise under maintenance?
Kind regards,
Matthias van de Meent
[0] https://www.postgresql.org/list/pgsql-hackers/2024-08/
On Sun, 11 Aug 2024 at 21:44, Peter Geoghegan wrote:
>
> On Tue, Aug 6, 2024 at 6:31 PM Matthias van de Meent
> wrote:
> > +1, LGTM.
> >
> > This changes the backward scan code in _bt_readpage to have an
> > approximately equivalent handling as the forward sc
t was,
limiting the scope of the changes to only the named functions.
Finally, this could be a great start on prefix truncation for btree
indexes, though that is _not_ the goal of this patch. This patch skips, but
does not truncate, the common prefix.
Kind regards,
Matthias van de Meent
P.S. One m
ched as 'normal' deduplication.
Thanks,
Matthias van de Meent
[0] https://archive.org/stream/symmetricconcurr00lani
patch which properly sets these values at the appropriate places.
Any thoughts?
Matthias van de Meent
From f41da096b1f36118917fe345e2a6fc89530a40c9 Mon Sep 17 00:00:00 2001
From: Matthias van de Meent
Date: Thu, 24 Sep 2020 20:41:10 +0200
Subject: [PATCH] Report the active index for reindex
On Fri, 25 Sep 2020 at 08:44, Michael Paquier wrote:
>
> On Thu, Sep 24, 2020 at 09:19:18PM +0200, Matthias van de Meent wrote:
> > While working on a PG12-instance I noticed that the progress reporting of
> > concurrent index creation for non-index relations fails to up
max_parallel_workers_per_gather wasn't
noticed during that development.
PFA a trivial one-line patch that makes that a bit more consistent.
Kind regards,
Matthias van de Meent
v1-0001-Use-MAX_PARALLEL_WORKER_LIMIT-consistently.patch
Description: Binary data
e the number of buffers we can fail to start
a new primitive scan by locally keeping track of the number of pages
we've processed without starting an expected primitive scan, but I
feel that it'd result in overall worse performance if we tried to wait
for more time than just doing the skip che
allocation of partialxlogfname in
this code. It could well do without, by "just" reusing the xlogfname
scratch space when we fail to recover the full segment.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Thu, 17 Oct 2024 at 00:33, Peter Geoghegan wrote:
>
> On Wed, Oct 16, 2024 at 5:48 PM Matthias van de Meent
> wrote:
> > In v17 and the master branch you'll note 16 buffer hits for the test
> > query. However, when we use more expensive btree compare operations
>
On Wed, 16 Oct 2024 at 20:52, Peter Geoghegan wrote:
>
> On Fri, Oct 11, 2024 at 10:27 AM Matthias van de Meent
> wrote:
> > With the introduction of the new SAOP handling in PG17, however, the
> > shared state has become a bit more muddied. Because the default has
&
Page
must happen before _bt_p_release, so that users don't get conflicts
caused only by bad timing issues in single-directional index scans.
Apart from these two comments, LGTM.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
_ids to catch
up to those LSNs.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
There is
no SQL standard-prescribed COMMENT command (if our current docs are to
be believed, I don't have a recent version of ISO 9075 to verify that
claim).
Maybe: "Do not dump database object comments", or "Do not dump COMMENT
ON ... commands"?
Kind regards,
Matthias van de Meent
Neon (https://neon.tech/)
is patch generates
required skip arrays for all attributes that don't yet have an
equality key and which are ahead of any (in)equality keys, except the
case with row compare keys which I already commented on above.
> utils/skipsupport.[ch]
I'm not sure why this is included in utils
e capture.
PS. I have other complaints about timestamp-based
replication/snapshots, but unless someone thinks otherwise and/or it
is made relevant I'll consider that off-topic.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Mon, 4 Nov 2024 at 21:00, Bruce Momjian wrote:
>
> On Mon, Nov 4, 2024 at 07:49:45PM +0100, Daniel Gustafsson wrote:
> > > On 4 Nov 2024, at 17:24, Erik Wienhold wrote:
> > > But I also think that
> > > "SQL" in front of the command name is unnecessary because the man page
> > > uses the "FOO
tly
1ms/3ms full round trip latency:
1 page/1 ms * 8kB/page * 256 concurrency = 256 pages/ms * 8kB/page =
2MiB/ms ~= 2GiB/sec.
for 3ms divide by 3 -> ~666MiB/sec.
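For reference, that arithmetic can be checked mechanically (illustrative Python; `throughput_mib_per_s` is a hypothetical helper mirroring the numbers above):

```python
def throughput_mib_per_s(latency_ms, page_kib=8, concurrency=256):
    # With `concurrency` requests in flight and a full round trip of
    # `latency_ms`, each slot completes one 8 KiB page per round trip.
    pages_per_ms = concurrency / latency_ms
    mib_per_ms = pages_per_ms * page_kib / 1024
    return mib_per_ms * 1000  # per-millisecond -> per-second

print(round(throughput_mib_per_s(1)))  # 2000 MiB/s, i.e. ~2 GiB/s
print(round(throughput_mib_per_s(3)))  # 667 MiB/s, i.e. the ~666 above
```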
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Thu, 28 Nov 2024 at 19:57, Tom Lane wrote:
>
> Matthias van de Meent writes:
> > On Thu, 28 Nov 2024 at 18:19, Robert Haas wrote:
> >> [...] It's unclear to me why
> >> operating systems don't offer better primitives for this sort of thing
> >&g
be placed (some
conditions apply, based on flags and the specific API used), so,
assuming we have some control over where to allocate memory, we should be able to
reserve enough memory by using these APIs.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Wed, 4 Sept 2024 at 17:32, Tomas Vondra wrote:
>
> On 9/4/24 16:25, Matthias van de Meent wrote:
> > On Tue, 3 Sept 2024 at 18:20, Tomas Vondra wrote:
> >> FWIW the actual cost is somewhat higher, because we seem to need ~400B
> >> for every lock (not jus
unique
indexes could be attached to primary keys without taking O(>=
tablesize) of effort.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
want to have lost that data.
Same applies for ~scans~ searches: If we do an index search, we should
show it in the count as total sum, not partial processed value. If a
user is interested in per-loopcount values, then they can derive that
value from the data they're presented with; but that isn't true when
we present only the divided-and-rounded value.
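A toy example of that information loss (illustrative Python, not EXPLAIN's actual code; the per-loop counts are hypothetical):

```python
loops = 3
searches_per_loop = [1, 1, 2]   # hypothetical per-loop index searches

total = sum(searches_per_loop)  # 4: exact; the user can still derive
per_loop = round(total / loops) # the 4/3 per-loop average themselves.

# But from the divided-and-rounded value alone, the total is gone:
assert per_loop * loops != total    # 1 * 3 == 3, not 4
print(total, per_loop)  # 4 1
```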
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
dfuncs.c, and/or the usage of
pg_node_tree for (among others) views, index/default expressions,
constraints, and partition bounds would maybe be useful as well.
> None of this is a substitute for installing some kind of ABI-checking
> infrastructure;
Agreed.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
at
least, most) active backbranches older than PG17's.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
PS.
I don't think the optimization itself is completely impossible, and we
can probably re-enable an optimization like that if (or when) we find
a way to reliably keep
On Thu, 28 Nov 2024 at 22:09, Alena Rybakina wrote:
>
> Hi!
>
> On 27.11.2024 16:36, Matthias van de Meent wrote:
>> On Wed, 27 Nov 2024 at 14:22, Alena Rybakina
>> wrote:
>>> Sorry it took me so long to answer, I had some minor health complications
>>>
the table to commit (thus all
bitmaps would be dropped), similar to REINDEX CONCURRENTLY's wait
phase, but that would slow down vacuum's ability to clean up old data
significantly, and change overall vacuum behaviour in a fundamental
way. I'm quite opposed to such a change.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Mon, 2 Dec 2024 at 17:31, Andres Freund wrote:
>
> Hi,
>
> On 2024-12-02 16:08:02 +0100, Matthias van de Meent wrote:
> > Concurrency timeline:
> >
> > Session 1. amgetbitmap() gets snapshot of index contents, containing
> > references to dead tuple
That said, I don't think it'd be safe to use with repalloc, as that
would likely truncate the artificial hole in the memory chunk,
probably requiring restoration work by the callee on the prefixed
arrays. That may be a limitation we can live with, but I haven't
checked to see if there are any usages of repalloc() on TupleDesc.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Sat, 4 Jan 2025 at 02:00, Matthias van de Meent
wrote:
>
> On Tue, 3 Dec 2024 at 17:21, Peter Geoghegan wrote:
> >
> > On Mon, Dec 2, 2024 at 8:18 PM Peter Geoghegan wrote:
> > > Attached is a refined version of a test case I posted earlier on [2],
> > >
cy on the btree opclasses for
indexable types. This can cause "bad" ordering, or failure to build
the index when the parallel path is chosen and no default btree
opclass is defined for the type. I think it'd be better if we allowed
users to specify which sortsupport function to use, or at least use
the correct compare function when it's defined on the attribute's
operator class.
> include/access/gin_tuple.h
> +    OffsetNumber attrnum;    /* attnum of index key */
I think this would best be AttrNumber-typed? Looks like I didn't
notice or fix that in 0009.
> My plan is to eventually commit the first couple patches, possibly up
> 0007 or even 0009.
Sounds good. I'll see if I have some time to do some cleanup on my
patches (0008 and 0009), as they need some better polish on the
comments and commit messages.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
which the
isolation tester doesn't (can't?) detect. With only 0001, the new test
fails with incorrect results, with 0002 applied the test succeeds.
I'm looking forward to any feedback.
Kind regards,
Matthias van de Meent
v3-0001-isolationtester-showing-broken-index-only-scans-w.patch
Description: Binary data
v3-0002-RFC-Extend-buffer-pin-lifetime-for-GIST-IOS.patch
Description: Binary data
oads
quickly regress to always extending the table as no cleanup can
happen, while patched they'd have much more leeway due to page
pruning. Presumably a table with a fillfactor <100 would show the best
results.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
rocess_keys
+ * would've marked the qual as unsatisfyable, preventing us from
+ * ever getting this far.
Apart from that minor issue, LGTM.
Kind regards,
Matthias van de Meent
nd thus get CFBot to succeed again.
The patches for the back-branches didn't need updating, as those
branches have not diverged enough for those patches to have gotten
stale. They're still available in my initial mail over at [0].
Kind regards,
Matthias van de Meent
Neon (
will release the seized parallel scan, if any.
> */
I don't understand the placement of that comment, as it's quite far
away from any parallel scan related code and it's very unrelated to
the index scan statistics.
If this needs to be added, I think I'd put it next to the call to
_bt_parallel_seize().
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
ss to the fields of the old
versions of updated tuples to correctly apply updates, thus requiring
a single snapshot for the full scan.
Maybe that's something that can be further improved upon, maybe not.
REPACK CONCURRENTLY is an improvement over the current situation
w.r.t. locks, but it
On Sat, 1 Feb 2025 at 06:01, Zhang Mingli wrote:
>
>
>
> Zhang Mingli
> www.hashdata.xyz
> On Jan 30, 2025 at 15:49 +0800, Matthias van de Meent <
boekewurm+postg...@gmail.com>, wrote:
>
> Hi,
>
> Thanks for your insights.
> While the buffer tag consumes a
On Fri, 31 Jan 2025 at 18:23, James Hunter wrote:
>
> On Wed, Jan 29, 2025 at 11:49 PM Matthias van de Meent
> wrote:
> >
> > Hi,
> >
> > Some time ago I noticed that every buffer table entry is quite large at 40
> > bytes (+8): 16 bytes of HASHELEMEN
On Thu, 30 Jan 2025 at 08:48, Matthias van de Meent
wrote:
>
> Together that results in the following prototype patchset.
Here's an alternative patch, which replaces dynahash in the buffer
lookup table with an open-coded replacement that has fewer
indirections during lookups, and a
patch with a new
approach to reducing the buffer lookup table's memory in [0], which
attempts to create a more cache-efficient hash table implementation.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
[0]
https://www.postgresql.org/message-id/CAEze2WiRo4Zu71jwxYmqjq6XK814Avf2-kytaL6n%3DBreZR2ZbA%40mail.gmail.com
8-
Do you have any documentation on the approaches used, and the specific
differences between v3 and v4? I don't see much of that in your
initial mail, and the patches themselves also don't show much of that
in their details. I'd like at least some documentation of the new
behaviour in src/backend/access/heap/README.HOT at some point before
this gets marked as RFC in the commitfest app, though preferably sooner
rather than later.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Wed, 5 Feb 2025 at 02:22, Andres Freund wrote:
>
> Hi,
>
> On 2025-02-04 19:58:36 +0100, Matthias van de Meent wrote:
> > On Thu, 30 Jan 2025 at 08:48, Matthias van de Meent
> > wrote:
> > >
> > > Together that results in the following prototype pat
On Wed, 5 Feb 2025 at 02:14, Andres Freund wrote:
>
> Hi,
>
> On 2025-01-30 08:48:56 +0100, Matthias van de Meent wrote:
> > Some time ago I noticed that every buffer table entry is quite large at 40
> > bytes (+8): 16 bytes of HASHELEMENT header (of which the last 4 by
On Mon, 10 Feb 2025 at 20:11, Burd, Greg wrote:
> > On Feb 10, 2025, at 12:17 PM, Matthias van de Meent
> > wrote:
> >
> >>
> >> I have a few concerns with the patch, things I’d greatly appreciate your
> >> thoughts on:
> >>
> >>
On Tue, 11 Feb 2025 at 00:20, Nathan Bossart wrote:
>
> On Mon, Feb 10, 2025 at 06:17:42PM +0100, Matthias van de Meent wrote:
> > I have serious doubts about the viability of any proposal working to
> > implement PHOT/WARM in PostgreSQL, as they seem to have an inhe
On Tue, 7 Jan 2025 at 12:59, Tomas Vondra wrote:
>
> On 1/6/25 20:13, Matthias van de Meent wrote:
>> ...
>>>
>>> Thanks. Attached is a rebased patch series fixing those issues, and one
>>> issue I found in an AssertCheckGinBuffer, which was calling the ot
on-duplicating tuplesort code, meaning they can be used to detect
regressions in this patch (vs a non-UNIQUE index build with otherwise
the same definition).
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
[0]
https://postgr.es/m/flat/6ab4003f-a8b8-4d75-a67f-f25ad98582dc%40enterprisedb.com
ng issues when the table grows larger than 1 GB.
I expect that error to disappear when you replace the
dsa_allocate0(...) call in dshash.c's resize function with
dsa_allocate_extended(..., DSA_ALLOC_HUGE | DSA_ALLOC_ZERO) as
attached, but haven't tested it due to a
On Sat, 21 Dec 2024 at 01:05, Thomas Munro wrote:
>
> On Sat, Dec 21, 2024 at 11:41 AM Matthias van de Meent
> wrote:
> > The unlinking of forks in the FileTag infrastructure has been broken
> > since b0a55e43 in PG16,
> > while a segment number other than 0 has never
include the correct fork
number and segment number when there is a need to unlink
non-MAIN_FORKNUM or non-segno=0 files in mdunlinkfiletag.
Kind regards,
Matthias van de Meent
[0]
https://www.postgresql.org/message-id/flat/CAEze2WiWt%2B9%2BOnqW1g9rKz0gqxymmt%3Doe6pKAEDrutdfpDMpTw%40mail.gmail.com
13-15 will take a little bit more
effort due to code changes in PG16; though it'd probably still require
a relatively minor change.
Kind regards,
Matthias van de Meent.
Neon (https://neon.tech)
v1-0001-MD-smgr-Unlink-the-requested-file-segment-not-mai.patch
Description: Binary data
eve your understanding may be quite out of date. Not all planner
or executor features and optimizations are explicitly present in the
output of EXPLAIN, and the examples all indicate you may be working
with an outdated view of Postgres' capabilities.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
d size. This is different from smgrwrite, which tries to write
again when FileWriteV returns a short write. Should smgrextendv do
retries, too?
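The retry-on-short-write behaviour referenced for smgrwrite follows a common pattern; a generic sketch (illustrative Python on plain file descriptors, not the smgr code; `write_all` is a hypothetical helper):

```python
import os

def write_all(fd, buf):
    # POSIX write() may write fewer bytes than requested (a "short
    # write"); retry with the remainder until everything is written.
    view = memoryview(buf)
    while view:
        written = os.write(fd, view)
        view = view[written:]
```

Whether smgrextendv should adopt the same loop is exactly the open question above.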
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
[0]
https://postgr.es/m/flat/CACAa4VJ%2BQY4pY7M0ECq29uGkrOygikYtao1UG9yCDFosxaps9g%40mail.gmai
on you
merge into that heap, it'll re-initialize the bulk writer, which will thus
overwrite the previous rewrites' pages. The pre-PG17 rewriteheap.c doesn't
use that API, and thus isn't affected.
I've CC-ed Heikki as author of that patch; maybe a new API to indicate bulk
On Fri, 22 Nov 2024 at 09:11, Erik Nordström wrote:
>
>
>
> On Fri, Nov 22, 2024 at 12:30 AM Matthias van de Meent
> wrote:
>>
>> On Thu, 21 Nov 2024, 17:18 Erik Nordström, wrote:
>>>
>>> Hello,
>>>
>>> I've noticed a change
dards,
we'd suddenly lose compatibility with the standard we said we
supported, which isn't a nice outlook. Compare that to RFCs, which
AFAIK don't change in specification once released.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
variables actually
support the unit conversion implied by the unit options.
Kind regards,
Matthias van de Meent
an?
If we hit the heap (due to ! VM_ALL_VISIBLE) and detected the heap
tuple was dead, why couldn't we mark it as dead in the index? IOS
assumes a high rate of all-visible pages, but it's hardly unheard of
to access pages with dead tuples.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
s that on
purpose and is the user expected to include both headers, or should
utils/memutils.h be included in utils/array.h?
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
t the internals of dynahash, rather than dynahash internally
optimizing usage based on a clearer picture of what the hash entry needs.
Does anyone have an idea on how to best benchmark this kind of patch, apart
from "running pgbench"? Other ideas on how to improve this? Specific
concerns?
clobbered by OpenSSL, that would be a good
explanation for these issues. Can you check this?
> The really weird thing is that the very same binaries work on a
> different host (arm64 VM provided by Huawei) - the
> postgresql_arm64.deb files compiled there and present on
> apt.postgresql.org are fine, but when installed on that graviton VM,
> they throw the above error.
If I were you, I'd start looking into the differences in behaviour of
OpenSSL between the two ARM-based systems you mention; particularly
with a focus on register contents. It looks like gdb's `i r ...`
command could help out with that - or so StackOverflow tells me.
Kind regards,
Matthias van de Meent
ld be allocated through a hook,
shmem_request_hook, and not through direct calls to
RequestAddinShmemSpace in _PG_init().
For specific info, see [0] and [1] which introduced
shmem_request_hook. PGPedia also has some info on how to deal with
older PG versions: [2]
I hope this helps.
Kind regards,
p here, as write tears
can happen at nearly any offset into the page - not just 8k intervals
- and so the page header is not always representative of the origins
of all bytes on the page - only the first 24 (if even that).
Kind regards,
Matthias van de Meent
ny indexed column is updated, even if we could detect that there were
no changes to any indexed values.
Actually, you could say we find ourselves in the counter-intuitive
situation where the addition of a 'hotblocking' index whose values
were not updated has now caused index insertions into summarizing indexes.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Thu, 13 Feb 2025 at 19:46, Burd, Greg wrote:
>
> Attached find an updated patchset v5 that is an evolution of v4.
>
> Changes v4 to v5 are:
> * replaced GUC with table reloption called "expression_checks" (open to other
> name ideas)
> * minimal documentation updates to README.HOT to address c
as WIP and its
feature explicitly conflicts with my 0004.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
[0]
https://www.postgresql.org/message-id/CAEze2WhRFzd=nvh9YevwiLjrS1j1fP85vjNCXAab=iybz2r...@mail.gmail.com
v20250307-0004-Make-Gin-parallel-builds-use-a-single-tupl.
On Mon, 17 Mar 2025 at 23:51, Matthias van de Meent
wrote:
>
> On Tue, 11 Mar 2025 at 16:53, Peter Geoghegan wrote:
> >
> > On Sat, Mar 8, 2025 at 11:43 AM Peter Geoghegan wrote:
> > > I plan on committing this one soon. It's obviously pretty pointless to
>
> +}
> +    break;    /* pstate.ikey to be set to scalar key's ikey */
This code finds out that no tuple on the page can possibly match the
scankey (idxtup=scalar returns non-0 value) but doesn't (can't) use it
to exit the scan. I thin
On Wed, 2 Apr 2025 at 17:37, Andres Freund wrote:
>
> Hi,
>
> Matthias, any chance you can provide a rebased version for master?
Sure, I'll try to have it in your inbox later today CEST.
> Either way I'm planning to push this fairly soon.
OK, thanks!
Kind regards,
M
;d
at least save the I/O and WAL of the cleanup scan.
Any comments/suggestions? POC patch attached.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
v00-0001-WIP-Optimize-VACUUM-for-tables-with-only-summari.patch
Description: Binary data
's return.
I don't know much about the planner, but I would expect a RelOptInfo's
relids field to always contain at least one relid when it's not
currently being constructed; thus guaranteeing a non-negative result
when looking for the first bit (as indicated by "ne
On Tue, 1 Apr 2025 at 23:56, Matthias van de Meent
wrote:
>
> On Tue, 1 Apr 2025 at 04:02, Peter Geoghegan wrote:
> >
> > On Fri, Mar 28, 2025 at 5:59 PM Peter Geoghegan wrote:
> > > Attached is v32, which has very few changes, but does add a new patch:
> &g
en if it
isn't, it removes the load of *numSkipArrayKeys from the hot path).
// utils/skipsupport.h, nbtutils.c
I think the increment/decrement callbacks for skipsupport should
explicitly check (by e.g. Assert) for NULL (or alternatively: same
value) returns on overflow, and the API definition sh
0003: LGTM
0004: LGTM, but note the current bug in 0001, which is probably best
solved with a fix that keeps this optimization in mind, too.
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
On Tue, 1 Apr 2025 at 21:02, Peter Geoghegan wrote:
>
> On Tue, Apr 1, 2025 at 10:40 AM Matthias van de Meent
> wrote:
> > > When nbtree is passed input scan keys derived from a
> > > query predicate "WHERE b = 5", new nbtree preprocessing steps now output
&
On Sun, 16 Mar 2025 at 13:55, vignesh C wrote:
>
> On Wed, 5 Mar 2025 at 16:43, Matthias van de Meent
> wrote:
> >
> > On Sun, 2 Mar 2025 at 01:35, Tom Lane wrote:
> > >
> > > Peter Geoghegan writes:
> > > > Is everybody in agreement about c
ably want to see only very durable data.
This would also unify the commit visibility order between primary and
secondary nodes, and would allow users to have session-level 'wait for
LSN x to be persistent' with much reduced lock times.
(CC-ed to Ants, given his interest in this topic)
Kind regards,
Matthias van de Meent
Neon (https://neon.tech)
.
Kind regards,
Matthias van de Meent
On Fri, 21 Mar 2025 at 17:14, Matthias van de Meent
wrote:
> Attached is v10, which polishes the previous patches, and adds a patch
> for nbtree to use the new visibility checking strategy so that it too
> can release its index pages much earlier, and adds a similar
> visibility c