On 3/9/23 19:00, Tomas Vondra wrote:
>
>
> On 3/9/23 01:30, Michael Paquier wrote:
>> On Thu, Mar 09, 2023 at 12:39:08AM +0100, Tomas Vondra wrote:
>>> IMO we should fix that. We have a bunch of buildfarm members running on
>>> Ubuntu 18.04 (or older) - i
On 3/8/23 23:31, Matthias van de Meent wrote:
> On Wed, 22 Feb 2023 at 14:14, Matthias van de Meent
> wrote:
>>
>> On Wed, 22 Feb 2023 at 13:15, Tomas Vondra
>> wrote:
>>>
>>> On 2/20/23 19:15, Matthias van de Meent wrote:
>>>> Thanks. B
On 3/14/23 11:34, Christoph Berg wrote:
> Re: Tomas Vondra
>> and I don't think there's a good place to inject the 'rm' so I ended up
>> adding a 'cleanup_cmd' right after 'compress_cmd'. But it seems a bit
>> strange / hacky. Maybe t
g the API error-handling stuff.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/14/23 12:07, gkokola...@pm.me wrote:
>
>
> --- Original Message ---
> On Monday, March 13th, 2023 at 10:47 PM, Tomas Vondra
> wrote:
>
>
>
>>
>>> Change pg_fatal() to an assertion+comment;
>>
>>
>> Yeah, that's reaso
On 3/10/23 11:03, John Naylor wrote:
>
> On Wed, Mar 1, 2023 at 1:02 AM Tomas Vondra
> mailto:tomas.von...@enterprisedb.com>>
> wrote:
>> here's a rebased patch to make cfbot happy, dropping the first part that
>> is now unnecessary thanks to 7fe1aa991b.
>
On 3/16/23 18:04, gkokola...@pm.me wrote:
>
> --- Original Message ---
> On Tuesday, March 14th, 2023 at 4:32 PM, Tomas Vondra
> wrote:
>>
>> On 3/14/23 16:18, gkokola...@pm.me wrote:
>>
>>> ... Would you mind me trying to come up with a patch
On 3/16/23 01:20, Justin Pryzby wrote:
> On Mon, Mar 13, 2023 at 10:47:12PM +0100, Tomas Vondra wrote:
>>> Rearrange functions to their original order allowing a cleaner diff to the
>>> prior code;
>>
>> OK. I wasn't very enthusiastic about this initially,
On 3/16/23 23:58, Justin Pryzby wrote:
> On Thu, Mar 16, 2023 at 11:30:50PM +0100, Tomas Vondra wrote:
>> On 3/16/23 01:20, Justin Pryzby wrote:
>>> But try reading the diff while looking for the cause of a bug. It's the
>>> difference between reading 50, two-
errmsg);
>>> - pg_free(errmsg);
>>> + compress_spec.parse_error);
>>> + pg_free(compress_spec.parse_error);
>>> }
>>
>> The top-level error here is "does not support compression", but w
ich seems like a "proper" solution for
the future.
Getting that soon (in PG17) is unlikely, so let's revive the rebalance
and/or spilling patches. Imperfect, but better than nothing.
And then in the end we can talk about if/what can be backpatched.
FWIW I don't think there's a lot of rush, considering this is clearly a
matter for PG17. So the summer CF at the earliest; people are going to
be busy until then.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
cus on correctness.
>
Yes, that makes sense. There are far too many patches in this thread
already ...
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/17/23 06:53, John Naylor wrote:
> On Wed, Mar 15, 2023 at 7:51 PM Tomas Vondra
> mailto:tomas.von...@enterprisedb.com>>
> wrote:
>>
>>
>>
>> On 3/14/23 08:30, John Naylor wrote:
>> > I tried a couple toy examples with various combination
On 3/17/23 18:55, Tomas Vondra wrote:
>
> ...
>
> This however made me realize the initial sync of sequences may not be
> correct. I mean, the idea of tablesync is syncing the data in REPEATABLE
> READ transaction, and then applying decoded changes. But sequences are
> not
On 3/18/23 06:35, Amit Kapila wrote:
> On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra
> wrote:
>>
>> ...
>>
>> Clearly, for sequences we can't quite rely on snapshots/slots, we need
>> to get the LSN to decide what changes to apply/skip from somewhere el
On 3/20/23 04:42, Amit Kapila wrote:
> On Sat, Mar 18, 2023 at 8:49 PM Tomas Vondra
> wrote:
>>
>> On 3/18/23 06:35, Amit Kapila wrote:
>>> On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra
>>> wrote:
>>>>
>>>> ...
>>>>
>
On 3/19/23 20:31, Justin Pryzby wrote:
> On Fri, Mar 17, 2023 at 05:41:11PM +0100, Tomas Vondra wrote:
>>>> * Patch 2 is worth considering to backpatch
>>
>> I'm not quite sure what exactly are the numbered patches, as some of the
>> threads had a number of d
On 3/14/23 15:41, Matthias van de Meent wrote:
> On Tue, 14 Mar 2023 at 14:49, Tomas Vondra
> wrote:
>>
>>> ...
>
>> If you agree with these changes, I'll get it committed.
>
> Yes, thanks!
>
I've tweaked the patch per the last round of commen
On 3/20/23 12:00, Amit Kapila wrote:
> On Mon, Mar 20, 2023 at 1:49 PM Tomas Vondra
> wrote:
>>
>>
>> On 3/20/23 04:42, Amit Kapila wrote:
>>> On Sat, Mar 18, 2023 at 8:49 PM Tomas Vondra
>>> wrote:
>>>>
>>>> On 3/18/23 06:35, Amit Kapila wrote:
On 3/20/23 13:26, Amit Kapila wrote:
> On Mon, Mar 20, 2023 at 5:13 PM Tomas Vondra
> wrote:
>>
>> On 3/20/23 12:00, Amit Kapila wrote:
>>> On Mon, Mar 20, 2023 at 1:49 PM Tomas Vondra
>>> wrote:
>>>>
>>>>
>>>> I don
'd be much better to just not store the statistics target for
attributes that have it default (which we now identify by -1), or for
system attributes (where we store 0). I'd bet vast majority of systems
will just use the default / GUC value. So if we're interested in saving
these by
r, shouldn't we fix it to use get_error_func()?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From 6a3d5d743f022ffcd0fcaf3d6e9ba711e2e785e7 Mon Sep 17 00:00:00 2001
From: Georgios Kokolatos
Date: Fri, 17 Mar 2023 14:45:58 +
Su
> As long as this touches pg_backup_directory.c you could update the
> header comment to refer to "compressed extensions", not just .gz.
>
> I noticed that EndCompressorLZ4() tests "if (LZ4cs)", but that should
> always be true.
>
I haven't done these two things. We can/should do that, but it didn't
fit into the three patches.
> I was able to convert the zstd patch to this new API with no issue.
>
Good to hear.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>> already noticed that long before I was aware of this problem and
>> discussion:
>>
>> 2019-07-10: «I do think that accounting for BufFile overhead when
>> estimating
>> the size of the hashtable during ExecChooseHashTableSize() so it can be
>>
On 3/20/23 20:54, Egor Rogov wrote:
> On 20.03.2023 22:27, Gregory Stark (as CFM) wrote:
>> On Sun, 22 Jan 2023 at 18:22, Tomas Vondra
>> wrote:
>>> I wonder if we have other functions doing something similar, i.e.
>>> accepting a polymorphic type and then
On 3/25/23 03:57, Andres Freund wrote:
> Hi,
>
> Starting with
>
> commit 7db0cd2145f2bce84cac92402e205e4d2b045bf2
> Author: Tomas Vondra
> Date: 2021-01-17 22:11:39 +0100
>
> Set PD_ALL_VISIBLE and visibility map bits in COPY FREEZE
>
That's a bumm
On 3/13/24 23:38, Thomas Munro wrote:
> On Sun, Mar 3, 2024 at 11:41 AM Tomas Vondra
> wrote:
>> On 3/2/24 23:28, Melanie Plageman wrote:
>>> On Sat, Mar 2, 2024 at 10:05 AM Tomas Vondra
>>> wrote:
>>>> With the current "master" code, ei
On 3/14/24 11:13, Peter Eisentraut wrote:
> On 12.03.24 14:32, Tomas Vondra wrote:
>> On 3/12/24 13:47, Peter Eisentraut wrote:
>>> On 06.03.24 22:34, Tomas Vondra wrote:
>>>> 0001
>>>>
>>>>
>>>> 1) I think this bit in ALT
On 3/14/24 20:14, Robert Haas wrote:
> On Tue, Feb 20, 2024 at 5:31 AM Tomas Vondra
> wrote:
>> I certainly agree that the current JIT costing is quite crude, and we've
>> all seen cases where the decision turns out to not be great. And I think
>> the plan to m
streaming read user in 0014.
>
Should I rerun the benchmarks with these new patches, to see if it
really helps with the regressions?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/15/24 03:21, David Rowley wrote:
> On Tue, 12 Mar 2024 at 23:57, Tomas Vondra
> wrote:
>> Attached is an updated version of the mempool patch, modifying all the
>> memory contexts (not just AllocSet), including the bump context. And
>> then also PDF with resul
ect, why would a single lock solve that? Yes,
we'd advance the iterators at the same time, but surely we'd not issue
the fadvise calls while holding the lock, and the prefetch/fadvise for a
particular block could still happen in different workers.
I suppose a dirty PoC fix should not be too difficult, and it'd allow us
to check if it works.
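To make that concrete, here's a minimal sketch of the ordering problem
(hypothetical helper/struct names, assuming the usual PG internals; note
that only the iterator advance is covered by the lock):

    /* hypothetical worker-side prefetch step */
    static void
    prefetch_next_block(ParallelState *pstate, int fd)
    {
        BlockNumber block;

        /* the shared iterator advances under the lock ... */
        LWLockAcquire(&pstate->prefetch_lock, LW_EXCLUSIVE);
        block = advance_shared_prefetch_iterator(pstate);  /* hypothetical */
        LWLockRelease(&pstate->prefetch_lock);

        /* ... but the fadvise happens outside it, so a worker holding a
         * later block can reach this call first -- the kernel still sees
         * the prefetch requests out of order */
        posix_fadvise(fd, (off_t) block * BLCKSZ, BLCKSZ,
                      POSIX_FADV_WILLNEED);
    }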
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/17/24 17:38, Andres Freund wrote:
> Hi,
>
> On 2024-03-16 21:25:18 +0100, Tomas Vondra wrote:
>> On 3/16/24 20:12, Andres Freund wrote:
>>> That would address some of the worst behaviour, but it doesn't really seem
>>> to
>>> address th
On 3/17/24 20:36, Tomas Vondra wrote:
>
> ...
>
>> Besides a lot of other things, I finally added debugging fprintfs printing
>> the
>> pid, (prefetch, read), block number. Even looking at tiny excerpts of the
>> large amount of output that generates shows that
On 3/18/24 15:47, Melanie Plageman wrote:
> On Sun, Mar 17, 2024 at 3:21 PM Tomas Vondra
> wrote:
>>
>> On 3/14/24 22:39, Melanie Plageman wrote:
>>> On Thu, Mar 14, 2024 at 5:26 PM Tomas Vondra
>>> wrote:
>>>>
>>>> On 3/14/24 19:16,
On 2/22/24 03:45, Melanie Plageman wrote:
> Thanks so much for reviewing!
>
> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
> wrote:
>>
>> When I first read this, I immediately started wondering if this might
>> use the commit timestamp stuff we already have.
On 3/18/24 15:02, Daniel Gustafsson wrote:
>> On 22 Feb 2024, at 03:45, Melanie Plageman wrote:
>> On Fri, Feb 16, 2024 at 3:41 PM Tomas Vondra
>> wrote:
>
>>> - Not sure why we need 0001. Just so that the "estimate" functions in
>>> 0002 have
's a reason why that doesn't work for copy_file_range? But
in that case this needs much clearer comments.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
From 39f42eee4c6f50d106672afe108294ee59082500 Mon Sep 17 00:00:00 2001
From: Tomas Vondra
On 3/18/24 16:55, Tomas Vondra wrote:
>
> ...
>
> OK, I've restarted the tests for only 0012 and 0014 patches, and I'll
> wait for these to complete - I don't want to be looking for patterns
> until we have enough data to smooth this out.
>
>
I now have
I haven't investigated this, but it seems to get broken by this patch:
v7-0009-Make-table_scan_bitmap_next_block-async-friendly.patch
I wonder if there are some additional changes aside from the rebase.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
nges, though).
To make this committable, I think it needs some review and testing,
ideally on a range of platforms.
One open question is how to allow testing this. For pg_upgrade we now
have PG_TEST_PG_UPGRADE_MODE, which can be set to e.g. "--clone". I
wonder if we should add PG_TEST_P
On 3/22/24 17:42, Robert Haas wrote:
> On Fri, Mar 22, 2024 at 10:40 AM Tomas Vondra
> wrote:
>> There's one question, though. As it stands, there's a bit of asymmetry
>> between handling CopyFile() on WIN32 and the clone/copy_file_range on
>> other platforms.
there a reliable way to say when the guarantees
actually apply? I mean, how would the administrator *know* it's safe to
set full_page_writes=off, or even better how could we verify this when
the database starts (and complain if it's not safe to disable FPW)?
It's easy to e.g.
On 3/22/24 19:40, Robert Haas wrote:
> On Fri, Mar 22, 2024 at 1:22 PM Tomas Vondra
> wrote:
>> Right, this will happen:
>>
>> pg_combinebackup: error: unable to use accelerated copy when manifest
>> checksums are to be calculated. Use --no-manifest
>
t; that either calls pread/pwrite or
copy_file_range, depending on checksums and what was requested.
BTW is there a reason why the code calls "write" and not "pg_pwrite"?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/23/24 14:47, Tomas Vondra wrote:
> On 3/23/24 13:38, Robert Haas wrote:
>> On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro wrote:
>>> Hmm, this discussion seems to assume that we only use
>>> copy_file_range() to copy/clone whole segment files, right? That's
n the benchmarks with v8, but unfortunately it crashes for
me very quickly (I've only seen 0015 crash, so I guess the bug is in
that patch).
The backtrace is attached; this doesn't seem right:
(gdb) p hscan->rs_cindex
$1 = 543516018
regards
--
Tomas Vondra
EnterpriseDB: h
On 3/24/24 18:38, Melanie Plageman wrote:
> On Sun, Mar 24, 2024 at 01:36:19PM +0100, Tomas Vondra wrote:
>>
>>
>> On 3/23/24 01:26, Melanie Plageman wrote:
>>> On Fri, Mar 22, 2024 at 08:22:11PM -0400, Melanie Plageman wrote:
>>>> On Tue, Mar 19, 20
On 3/24/24 21:12, Melanie Plageman wrote:
> On Sun, Mar 24, 2024 at 2:22 PM Tomas Vondra
> wrote:
>>
>> On 3/24/24 18:38, Melanie Plageman wrote:
>>> I haven't had a chance yet to reproduce the regressions you saw in the
>>> streaming read user patch or t
e should embrace it. Adding more and more special cases into
AllocSet seems to go directly against that idea, makes the code more
complex, and I don't quite see how that is better or easier to use than
having a separate BumpContext ...
Having an AllocSet that mixes chunks that may be freed with chunks that
can't be freed, and has a different context type in the chunk header,
seems somewhat confusing and "not great" for debugging, for example.
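For illustration, here's a minimal sketch of the usage contract I have
in mind (assuming BumpContextCreate() takes the same arguments as the
other context types; the names are made up):

    static void
    bump_example(void)
    {
        /* dedicated context type - allocation is a cheap pointer bump,
         * and the chunk header consistently identifies the context */
        MemoryContext bump = BumpContextCreate(CurrentMemoryContext,
                                               "tuple workspace",
                                               ALLOCSET_DEFAULT_SIZES);
        char       *chunk = MemoryContextAlloc(bump, 64);

        memset(chunk, 0, 64);   /* use it as scratch space */

        /* no pfree(chunk) - individual chunks are never freed, the
         * whole context gets reclaimed in one step */
        MemoryContextDelete(bump);
    }

That keeps the "no pfree" rule tied to the context type, instead of
having to remember which chunks inside an AllocSet happen to be
bump-allocated.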
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/26/24 15:09, Jakub Wartak wrote:
> On Sat, Mar 23, 2024 at 6:57 PM Tomas Vondra
> wrote:
>
>> On 3/23/24 14:47, Tomas Vondra wrote:
>>> On 3/23/24 13:38, Robert Haas wrote:
>>>> On Fri, Mar 22, 2024 at 8:26 PM Thomas Munro
>>>> wrote:
>
On 3/25/24 15:31, Robert Haas wrote:
> On Sat, Mar 23, 2024 at 9:37 AM Tomas Vondra
> wrote:
>> OK, that makes sense. Here's a patch that should work like this - in
>> copy_file we check if we need to calculate checksums, and either use the
>> requested copy method,
fore walsender is launched.
>>
>> One possible approach is to wait until the replication starts. Alternative
>> one is
>> to ease the condition.
>
> That's my suggestion too. I reused NUM_CONN_ATTEMPTS (that was renamed to
> NUM_ATTEMPTS in the first patch). See second patch.
>
Perhaps I'm missing something, but why is NUM_CONN_ATTEMPTS even needed?
Why isn't recovery_timeout enough to decide if wait_for_end_recovery()
waited long enough?
IMHO the test should simply pass PG_TEST_TIMEOUT_DEFAULT when calling
pg_createsubscriber, and that should do the trick.
Increasing PG_TEST_TIMEOUT_DEFAULT is how buildfarm animals running
things like ubsan/valgrind already deal with exactly this kind of
timeout problem.
Or is there a deeper problem with deciding if the system is in recovery?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/26/24 21:17, Euler Taveira wrote:
> On Tue, Mar 26, 2024, at 4:12 PM, Tomas Vondra wrote:
>> Perhaps I'm missing something, but why is NUM_CONN_ATTEMPTS even needed?
>> Why isn't recovery_timeout enough to decide if wait_for_end_recovery()
>> waited long en
*exact* numbers, and it should be exactly the same for all
the analyze runs. So how come it changes like this?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ect the page
to already be in memory, thanks to the gap). But with the combine limit
set to 32, is this still true?
I've tried going through read_stream_* to determine how this will
behave, but read_stream_look_ahead/read_stream_start_pending_read does
not make this very clear. I'l
On 3/27/24 20:37, Melanie Plageman wrote:
> On Mon, Mar 25, 2024 at 12:07:09PM -0400, Melanie Plageman wrote:
>> On Sun, Mar 24, 2024 at 06:37:20PM -0400, Melanie Plageman wrote:
>>> On Sun, Mar 24, 2024 at 5:59 PM Tomas Vondra
>>> wrote:
>>>
On 3/28/24 21:45, Robert Haas wrote:
> On Tue, Mar 26, 2024 at 2:09 PM Tomas Vondra
> wrote:
>> The patch I shared a couple minutes ago should fix this, effectively
>> restoring the original debug behavior. I liked the approach with calling
>> strategy_implementation
loop with index scan is missing these rows.
They might try a couple things:
1) set enable_nestloop=off, see if the results get correct
2) try bt_index_check on i_39773, might notice some corruption
3) rebuild the index
If it's not this, they'll need to build a reproducer. It's really
difficult to deduce what's going on just from query plans for different
parameter values.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/29/24 02:12, Thomas Munro wrote:
> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra
> wrote:
>> I think there's some sort of bug, triggering this assert in heapam
>>
>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);
>
> Thanks for
On 3/29/24 23:03, Thomas Munro wrote:
> On Sat, Mar 30, 2024 at 10:39 AM Thomas Munro wrote:
>> On Sat, Mar 30, 2024 at 4:53 AM Tomas Vondra
>> wrote:
>>> ... Maybe there should be some flag to force
>>> issuing fadvise even for sequential patterns, pe
y be wrong
but I don't think the regular RA (in linux kernel) works for ZFS, right?
I was wondering if we could use this (posix_fadvise) to improve that,
essentially by issuing fadvise even for sequential patterns. But now
that I think about it, if posix_fadvise has worked since 2.2, maybe RA
works too now?)
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 3/31/24 03:03, Thomas Munro wrote:
> On Sun, Mar 31, 2024 at 1:37 PM Tomas Vondra
> wrote:
>> So I decided to take a stab at Thomas' idea, i.e. reading the data to
>> ...
>> I'll see how this works on EXT4/ZFS next ...
>
> Wow, very cool! A couple of
On 3/31/24 06:46, Thomas Munro wrote:
> On Sun, Mar 31, 2024 at 5:33 PM Tomas Vondra
> wrote:
>> I'm on 2.2.2 (on Linux). But there's something wrong, because the
>> pg_combinebackup that took ~150s on xfs/btrfs, takes ~900s on ZFS.
>>
>> I'm not
t out soonish...)
>
It's entirely possible I'm just too stupid and it works just fine for
everyone else. But maybe not, and I'd say an implementation that is this
difficult to configure is almost as if it didn't exist at all. The Linux
read-ahead works pretty great by default.
So I don't see how to make this work without explicit prefetching ... Of
course, we could also do no prefetching and tell users it's up to ZFS to
make this work, but I don't think that does them any service.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
against the current master. 0006
> is not much relevant with current patch, and I think it can be committed
> individually if you are OK with that.
>
> Hope this kind of review is helpful.
>
Cool! There's obviously no chance to get this into v18, and I have stuff
to do in this CF. But I'll take a look after that.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
)-(3). If (4) and (5) get it, that's a bonus, but even without
that I don't think the performance is an issue - everything has a cost.
On 4/3/24 15:39, Jakub Wartak wrote:
> On Mon, Apr 1, 2024 at 9:46 PM Tomas Vondra
> wrote:
>>
>> Hi,
>>
>> I've been
On 4/4/24 12:25, Jakub Wartak wrote:
> On Thu, Apr 4, 2024 at 12:56 AM Tomas Vondra
> wrote:
>>
>> Hi,
>>
>> Here's a much more polished and cleaned up version of the patches,
>> fixing all the issues I've been aware of, and with various parts merge
On 4/4/24 19:38, Robert Haas wrote:
> Hi,
>
> Yesterday, Tomas Vondra reported to me off-list that he was seeing
> what appeared to be data corruption after taking and restoring an
> incremental backup. Overnight, Jakub Wartak further experimented with
> Tomas's test
On 4/6/24 01:53, Melanie Plageman wrote:
> On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:
>> On Thu, Apr 04, 2024 at 04:35:45PM +0200, Tomas Vondra wrote:
>>>
>>>
>>> On 4/4/24 00:57, Melanie Plageman wrote:
>>>> On Sun, Mar 31,
On 4/6/24 02:51, Tomas Vondra wrote:
>
> * The one question I'm somewhat unsure about is why Tom chose to use the
> "wrong" recheck flag in the 2017 commit, when the correct recheck flag
> is readily available. Surely that had a reason, right? But I can't
On 4/6/24 15:40, Melanie Plageman wrote:
> On Sat, Apr 06, 2024 at 02:51:45AM +0200, Tomas Vondra wrote:
>>
>>
>> On 4/6/24 01:53, Melanie Plageman wrote:
>>> On Fri, Apr 05, 2024 at 04:06:34AM -0400, Melanie Plageman wrote:
>>>> On Thu, Apr 04, 20
it between v17 and v18,
because even if heapam did not use the iterator, what if some other AM
uses it? Without 0012 it'd be a problem for the AM, no?
Would it make sense to move 0009 before these three patches? That seems
like a meaningful change on its own, right?
FWIW I don't thi
On 4/7/24 06:17, Melanie Plageman wrote:
> On Sun, Apr 07, 2024 at 02:27:43AM +0200, Tomas Vondra wrote:
>> On 4/6/24 23:34, Melanie Plageman wrote:
>>> ...
>>>>
>>>> I realized it makes more sense to add a FIXME (I used XXX. I'm not when
>
On 4/7/24 15:11, Melanie Plageman wrote:
> On Sun, Apr 7, 2024 at 7:38 AM Tomas Vondra
> wrote:
>>
>> On 4/7/24 06:17, Melanie Plageman wrote:
>>>> What bothers me on 0006-0008 is that the justification in the commit
>>>> messages is "future commi
On 4/7/24 16:24, Melanie Plageman wrote:
> On Sun, Apr 7, 2024 at 7:38 AM Tomas Vondra
> wrote:
>>
>>
>>
>> On 4/7/24 06:17, Melanie Plageman wrote:
>>> On Sun, Apr 07, 2024 at 02:27:43AM +0200, Tomas Vondra wrote:
>>>> On 4/6/24 23:34
On 4/7/24 19:46, Tomas Vondra wrote:
> On 4/5/24 21:43, Tomas Vondra wrote:
>> Hi,
>>
>> ...
>>
>> 2) The prefetching is not a huge improvement, at least not for these
>> three filesystems (btrfs, ext4, xfs). From the color scale it might seem
>> like i
I haven't investigated, but given that it works on 64-bit, I guess
it's not considering alignment somewhere. I can dig more if needed.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
warning: Can't open file /SYSV003414ed (deleted) during
On 4/7/24 22:35, Tomas Vondra wrote:
> On 4/7/24 14:37, David Rowley wrote:
>> On Sun, 7 Apr 2024 at 22:05, John Naylor wrote:
>>>
>>> On Sat, Apr 6, 2024 at 7:37 PM David Rowley wrote:
>>>>
>>> I'm planning on pushing these, pending a
On 4/7/24 23:09, Andres Freund wrote:
> Hi,
>
> On 2024-04-07 22:35:47 +0200, Tomas Vondra wrote:
>> I haven't investigated, but I'd considering it works on 64-bit, I guess
>> it's not considering alignment somewhere. I can dig more if needed.
>
>
new patch versions? I don't think so. (If anything, it does not seem
fair to expect others to do last-minute reviews under pressure.)
Maybe it'd be better to start by expanding the existing rule about not
committing patches introduced for the first time in the last CF. What
On 4/8/24 17:48, Matthias van de Meent wrote:
> On Mon, 8 Apr 2024 at 17:21, Tomas Vondra
> wrote:
>>
>> ...
>>
>> For me the main problem with the pre-freeze crush is that it leaves
>> pretty much no practical chance to do meaningful review/testing, an
On 4/8/24 21:32, Jelte Fennema-Nio wrote:
> On Mon, 8 Apr 2024 at 20:15, Tomas Vondra
> wrote:
>> I 100% understand how frustrating the lack of progress can be, and I
>> agree we need to do better. I tried to move a number of stuck patches
>> this CF, and I hope (and
esting, with more and more complex
workloads etc., and I'll keep doing that for a while.
Maybe I'm a bit too happy-go-lucky, but IMO the risk here is limited.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
everything before combining the
backups is not very convenient. Reading and writing the tar would make
this simpler.
> Around the same time, Tomas Vondra tested incremental backups with a
> cluster where he enabled checksums after taking the previous full
> backup. After combining the b
On 4/9/24 11:25, Matthias van de Meent wrote:
> On Mon, 8 Apr 2024 at 20:15, Tomas Vondra
> wrote:
>>
>>
>> On 4/8/24 17:48, Matthias van de Meent wrote:
>>> On Mon, 8 Apr 2024 at 17:21, Tomas Vondra
>>> wrote:
>>>>
>>>> Mayb
On 4/9/24 01:33, Michael Paquier wrote:
> On Tue, Apr 09, 2024 at 01:16:02AM +0200, Tomas Vondra wrote:
>> I don't feel too particularly worried about this. Yes, backups are super
>> important because it's often the only thing you have left when things go
>> wrong
With 1024 partitions it still takes only 38 backends to get a 50%
chance of a collision. Better, but considering we now have hundreds of
cores, I'm not sure that's sufficient.
(Obviously, we probably want a much lower probability of a collision; I
only used 50% to illustrate the changes.)
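For reference, the 50% figure follows from the usual birthday-problem
approximation, with $m = 1024$ partitions and $n$ backends:

    $P(\text{collision}) \approx 1 - e^{-n(n-1)/(2m)}
        = 1 - e^{-(38 \cdot 37)/2048} \approx 1 - e^{-0.687} \approx 0.50$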
regards
--
Tomas
On 1/8/24 16:51, Alvaro Herrera wrote:
> On 2023-Dec-12, Tomas Vondra wrote:
>
>> I propose we do a much simpler thing instead - allow the cache to be
>> initialized / cleaned up repeatedly, and make sure it gets reset at
>> convenient place (typically after index_in
r. And this seems to move it to lower layers again ...
> Also, my implementation does not yet have the optimization Tomas does
> to skip prefetching recently prefetched blocks. As he has said, it
> probably makes sense to add something to do this in a lower layer --
> such as in the streaming read API or even in bufmgr.c (maybe in
> PrefetchSharedBuffer()).
>
I agree this should happen in lower layers. I'd probably do this in the
streaming read API, because that would define the "scope" of the cache
(pages prefetched for that read). Doing it in PrefetchSharedBuffer would
mean a single cache for that particular backend.
But that's just an initial thought ...
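Just to sketch what I mean (all names hypothetical, assuming the usual
BlockNumber type; a small direct-mapped array, so stale entries simply
get overwritten):

    #define PREFETCH_CACHE_SIZE 128     /* arbitrary small power of two */

    typedef struct PrefetchCache
    {
        /* recently prefetched blocks, InvalidBlockNumber when empty */
        BlockNumber blocks[PREFETCH_CACHE_SIZE];
    } PrefetchCache;

    /* Returns true if blkno was prefetched recently, else records it. */
    static bool
    prefetch_cache_check(PrefetchCache *cache, BlockNumber blkno)
    {
        int         slot = blkno % PREFETCH_CACHE_SIZE;

        if (cache->blocks[slot] == blkno)
            return true;                /* skip the duplicate fadvise */

        cache->blocks[slot] = blkno;    /* remember it, evict old entry */
        return false;
    }

The stream would consult this right before issuing the fadvise, which
also naturally bounds how long a block stays "remembered".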
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2/13/24 20:54, Peter Geoghegan wrote:
> On Tue, Feb 13, 2024 at 2:01 PM Tomas Vondra
> wrote:
>> On 2/7/24 22:48, Melanie Plageman wrote:
>> I admit I haven't thought about kill_prior_tuple until you pointed it out.
>> Yeah, prefetching separates (de-synchronizes) the
ve a more sophisticated way to decide when to kill tuples and unpin
the index page (instead of just doing it when moving to the next index page)
Maybe that's what you meant by "more sophisticated bookkeeping", ofc.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2/13/24 17:37, Robert Haas wrote:
> On Sun, Jan 28, 2024 at 1:07 AM Tomas Vondra
> wrote:
>> Right, locks + apply in commit order gives us this guarantee (I can't
>> think of a case where it wouldn't be the case).
>
> I couldn't find any cases of
On 2/15/24 07:50, Andrei Lepikhov wrote:
> On 18/12/2023 19:53, Tomas Vondra wrote:
>> On 12/18/23 11:40, Richard Guo wrote:
>> The challenge is where to get usable information about correlation
>> between columns. I only have a couple very rough ideas of what might
>&
le way to improve
this in v2.
And I don't have an answer to that :-( I got completely lost in the
ongoing discussion about the locking implications (which I happily
ignored while working on the PoC patch), layering tensions, and
questions about which part should be "in control".
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 2/15/24 13:45, Andrei Lepikhov wrote:
> On 15/2/2024 18:10, Tomas Vondra wrote:
>>
>>
>> On 2/15/24 07:50, Andrei Lepikhov wrote:
>>> On 18/12/2023 19:53, Tomas Vondra wrote:
>>>> On 12/18/23 11:40, Richard Guo wrote:
>>>> The challeng
On 2/15/24 17:42, Peter Geoghegan wrote:
> On Thu, Feb 15, 2024 at 9:36 AM Tomas Vondra
> wrote:
>> On 2/15/24 00:06, Peter Geoghegan wrote:
>>> I suppose that it might be much more important than I imagine it is
>>> right now, but it'd be nice to have some
On 2/15/24 05:16, Robert Haas wrote:
> On Wed, Feb 14, 2024 at 10:21 PM Tomas Vondra
> wrote:
>> The way I think about non-transactional sequence changes is as if they
>> were tiny transactions that happen "fully" (including commit) at the LSN
>> where the LS
t think we make many promises about compatibility in
this regard ... it's probably better to always compare results only from
the same pgbench version, I guess.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
we should use a
similar naming convention, pg_basetypeof()?
2) I was going to suggest using "any" argument, just like pg_typeof, but
I see 0002 patch already does that. Thanks!
3) I think the docs probably need some formatting - wrapping lines (to
make it consistent with the nearby stuf