On Mon, Jun 2, 2025 at 8:15 PM Masahiko Sawada wrote:
>
> I think this issue has been present since commit c120550edb86, but that
> commit optimized the vacuum work for tables with no indexes and wasn't
> intended to change the FSM vacuum behavior for such tables. Therefore,
> I think we can fix the F
On Tue, Jun 3, 2025 at 2:00 PM Peter Geoghegan wrote:
>
> On Mon, Jun 2, 2025 at 4:00 PM Melanie Plageman
> wrote:
> > Perhaps I could keep track of the newest modified xid or some such
> > thing that is the newer of the newest removed xmax and newest frozen
> > xmin.
On Mon, Jun 2, 2025 at 3:40 PM Peter Geoghegan wrote:
>
> On Mon, Jun 2, 2025 at 3:30 PM Melanie Plageman
> wrote:
>
> > But, for cases when a few tuples are frozen on the page, it seems like it
> > could
> > be worth it?
>
> In general, I don't expect
On Mon, Jun 2, 2025 at 3:05 PM Peter Geoghegan wrote:
>
> On Mon, Jun 2, 2025 at 2:50 PM Melanie Plageman
> wrote:
> > As you've explained, it will always be <= OldestXmin. And, if the
> > record only freezes tuples (meaning it makes no other changes to the
>
On Thu, May 29, 2025 at 11:37 AM Peter Geoghegan wrote:
>
> Your concern is that the horizon might be overly aggressive/too
> conservative. But your patch (for 16) makes us take the
> don't-use-snapshotConflictHorizon-twice block *less* frequently (and
> the "use OldestXmin conservatively" block *
On Mon, Jun 2, 2025 at 10:31 AM Peter Geoghegan wrote:
>
> On Mon, Jun 2, 2025 at 10:27 AM Melanie Plageman
> wrote:
> > Attached what I plan to push shortly.
>
> Looks good to me.
Thanks! pushed.
part because I understood the temptation to do that.
That sounds right.
> > I'll push the fix tomorrow.
>
> Cool.
Attached what I plan to push shortly.
- Melanie
From 731dc1ef1285e167d877992eab64ebd644599926 Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Mon, 2 Jun 202
On Sun, Jun 1, 2025 at 12:07 PM Peter Geoghegan wrote:
>
> Commit 052026c9b9 made heap_vacuum_rel call RelationGetNumberOfBlocks
> before it calls vacuum_get_cutoffs -- it swapped the order. This is
> wrong, as explained by an intact comment above the call to
> vacuum_get_cutoffs.
>
> In short, th
On Fri, May 30, 2025 at 6:10 PM Peter Geoghegan wrote:
>
> On Fri, May 30, 2025 at 5:57 PM Melanie Plageman
> wrote:
> > I don't see how OldestXmin comes into play with the visibility_cutoff_xid.
>
> Code in heap_page_is_all_visible() (and other place, I guess the oth
On Fri, May 30, 2025 at 5:34 PM Peter Geoghegan wrote:
>
> On Fri, May 30, 2025 at 5:16 PM Melanie Plageman
> wrote:
>
> > You're right. I forgot that the visibility_cutoff_xid will be older
> > than OldestXmin when all the tuples are going to be frozen.
>
>
On Thu, May 29, 2025 at 11:37 AM Peter Geoghegan wrote:
I'm not 100% certain of my analysis given that this is a complicated
area of the code. I've attached suggested fixes for master/17 and 16
and am interested in what others think.
- Melanie
From fee539ddc97f1d58a883d37fcecea94af1261e8d Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Thu,
On Wed, May 28, 2025 at 10:49 PM Bruce Momjian wrote:
>
> On Wed, May 28, 2025 at 08:07:20PM -0400, Melanie Plageman wrote:
>
> Yes, I can now see it is two items so I have split it into two in the
> attached, applied patch. In a separate commit I adjusted the docs for
> log_c
On Thu, May 1, 2025 at 10:44 PM Bruce Momjian wrote:
>
> I will continue improving it until beta 1, and until the final release.
Hi Bruce,
Thanks so much for putting these together.
For the item:
"Increase the logging granularity of server variable log_connections"
I noticed that you cite the
On Sun, May 25, 2025 at 6:45 AM Dilip Kumar wrote:
>
> IMHO, if we include snapshot conflict horizon in cases where it is not
> necessary, don't you think it will impact performance on standby?
> because now it has to loop through the procarray on standby to check
> whether there is any conflict b
On Fri, May 23, 2025 at 12:41 PM Masahiko Sawada wrote:
>
> I'll remove that part and push early next week, barring any objections.
Great, thanks so much!
- Melanie
On Fri, May 23, 2025 at 12:04 PM Andres Freund wrote:
>
> > 2) if our inclusion of a cutoff_xid when freezing tuples is what makes
> > it safe to omit it from the VM update, then wouldn't that be true if
> > we included a cutoff_xid when pruning a page in a way that rendered it
> > all-visible too
Hi,
I'm trying to understand when the visibility map WAL record
(xl_heap_visible) needs to include a snapshot conflict horizon.
Currently, when emitting a xl_heap_visible record after phase I of
vacuum, we include a snapshot conflict horizon if the page is being
newly set all-visible in the VM.
W
On Wed, May 21, 2025 at 5:20 PM Jacob Champion
wrote:
>
> I took a quick look at the authentication and oauth_validator changes.
> Maybe 007_pre_auth.pl should include `receipt`, to make it easier to
> debug pre-auth hangs using the logs? Other than that, the changes
> looked reasonable to me.
Co
On Thu, May 22, 2025 at 4:07 PM Masahiko Sawada wrote:
>
> Agreed. I've updated the patch. Does this address your comments?
Yep. LGTM.
I'd probably just remove the parenthetical remark about 20% from the
commit message since that only applies to the success cap and
referencing both the success a
On Wed, May 21, 2025 at 6:11 PM Masahiko Sawada wrote:
>
> > if (vacrel->eager_scan_remaining_successes > 0)
> >     vacrel->eager_scan_remaining_successes--;
>
> I've attached a patch that uses this idea. Feedback is very welcome.
Thanks for writing the patch!
I actually think we have the same
On Tue, May 20, 2025 at 11:16 AM Melanie Plageman
wrote:
>
> In earlier versions of my patch, I played around with replacing these
> references in the docs. I ended up not doing it because I wasn't sure we had
> consensus on deprecating the "on", "true&q
On Tue, May 20, 2025 at 6:59 PM Masahiko Sawada
wrote:
> While I haven't identified how exactly read stream is related to this
> issue, what I've observed through debugging this issue is, during a
> single read_stream_next_buffer() call, I observed that
> heap_vac_scan_next_block() is invoked mul
On Tue, May 20, 2025 at 5:23 PM Masahiko Sawada
wrote:
>
> On Fri, May 2, 2025 at 11:59 AM Masahiko Sawada
> wrote:
> >
> > However, there is a possibility that we have already eagerly scanned
> > another page and returned it to the read stream before we freeze the
> > eagerly-scanned page and d
On Thu, Apr 10, 2025 at 12:00 PM Tom Lane wrote:
> Melanie recently committed a patch (9219093ca) that purports to
> generalize our log_connections logging ability:
>
> Convert the boolean log_connections GUC into a list GUC comprised of
> the
> connection aspects to log.
>
> This giv
On Tue, May 20, 2025 at 4:52 AM Peter Eisentraut
wrote:
> log_connections has been changed from a Boolean parameter to a string
> one, but a number of places in the documentation and in various pieces
> of test code still use the old values. I think it would be better if
> they were adjusted to
On Mon, Feb 3, 2025 at 12:37 AM Sami Imseih wrote:
>
> Besides that, I think this is ready for committer.
I started looking at this, and I like the idea.
A few comments: I don't understand what 0002 is. For starters, the
commit message says something about pg_stat_database, and there are no
chan
Hi,
Over the course of the last two releases, we have added many read
stream users. Each user specifies any number of flags (defined at the
top of read_stream.h) which govern different aspects of the read
stream behavior.
There are a few inconsistencies (many caused by me) that I want to
iron out
On Sun, Apr 6, 2025 at 4:15 PM Andres Freund wrote:
>
> I think we should consider increasing BAS_BULKREAD to something like
> Min(256, io_combine_limit * (effective_io_concurrency + 1))
Do you mean Max? If so, this basically makes sense to me.
Overall, I think even though the ring is about reu
On Fri, Apr 4, 2025 at 6:07 PM Masahiko Sawada wrote:
>
> I'm going to push this fix up to HEAD and v17 early next week, unless
> there is an objection.
I started studying this again looking back at the thread associated
with commit c120550edb86. There was actually a long discussion about
how thi
On Mon, Apr 7, 2025 at 5:15 AM David Rowley wrote:
> Are these failures from patches applied to master prior to 3ba2cdaa?
Yea, my email was held in moderation for days. I guess cross-posting is
flagged. I thought I saw people regularly cc pgsql-hackers when replying to
pgsql-committers, but I g
On Fri, Apr 4, 2025 at 9:17 AM Alexander Korotkov
wrote:
>
> Convert 'x IN (VALUES ...)' to 'x = ANY ...' then appropriate
>
> This commit implements the automatic conversion of 'x IN (VALUES ...)' into
> ScalarArrayOpExpr. That simplifies the query tree, eliminating the appearance
> of an unnece
On Sat, Apr 5, 2025 at 2:02 AM Konstantin Knizhnik wrote:
>
> A more targeted solution to your specific problem would be to update
> the visibility map on access. Then, the first time you have to fetch
> that heap page, you could mark it all-visible (assuming the long
> running transaction has end
On Sun, Apr 6, 2025 at 1:02 AM Masahiko Sawada wrote:
>
> The eager freeze scan is the pre-existing feature but it's pretty new
> code that was pushed just a couple months ago. I didn't want to make
> the newly introduced code complex further in one major release
> especially if it's in a vacuum a
On Fri, Apr 4, 2025 at 5:35 PM Masahiko Sawada wrote:
>
> > I haven't looked closely at this version but I did notice that you do
> > not document that parallel vacuum disables eager scanning. Imagine you
> > are a user who has set the eager freeze related table storage option
> > (vacuum_max_eage
there somewhere) subdirectories.
- Melanie
From 06689ada5011e1abeadaa0d33538e0b4069e5723 Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Thu, 20 Mar 2025 16:09:41 -0400
Subject: [PATCH v10 2/2] test backtrack
---
src/backend/access/nbtree/nbtree.c | 4 +
src/test/modules/test
On Tue, Apr 1, 2025 at 7:21 AM Nazir Bilal Yavuz wrote:
>
> On Tue, 1 Apr 2025 at 05:14, Melanie Plageman
> wrote:
> >
> +1 for using the functions. I think it is hard to follow / maintain
> this with the do-while loops and goto statements.
I'll take a look at your do
doesn't belong
and is a confusing distraction for future readers.
- Melanie
From 11f706609998aebbeac06e282a6e1d25d3f38fdd Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Thu, 3 Apr 2025 12:47:19 -0400
Subject: [PATCH v14 1/4] Fix autoprewarm neglect of tablespaces
While prewarming blocks
On Sun, Mar 23, 2025 at 4:46 AM Masahiko Sawada wrote:
>
> If we use ParallelBlockTableScanDesc with streaming read like the
> patch did, we would also need to somehow rewind the number of blocks
> allocated to workers. The problem I had with such usage was that a
> parallel vacuum worker allocate
On Mon, Mar 31, 2025 at 2:58 PM Nazir Bilal Yavuz wrote:
>
> Do you think that I should continue to
> attach both approaches?
No, for now let's try and get this approach to a good place and then
see which one we like.
I think there might be another problem with the code. We only set
cur_database
On Sun, Mar 30, 2025 at 10:01 AM Nazir Bilal Yavuz wrote:
>
> > Some review feedback on your v4: I don't think we need the
> > rs_have_free_buffer per_buffer_data. We can just check
> > have_free_buffers() in both the callback and main loop in
> > autoprewarm_database_main(). I also think you want
On Mon, Mar 31, 2025 at 12:27 PM Nazir Bilal Yavuz wrote:
>
> I worked on an alternative approach, I refactored code a bit. It does
> not traverse the list two times and I think the code is more suitable
> to use read streams now. I simply get how many blocks are processed by
> read streams and mo
On Sun, Mar 23, 2025 at 1:02 PM Andrey Borodin wrote:
>
> There's a BF failure with just these changes [0]. But IMO it's unrelated.
> There are 2 failed tests:
> 1. 'activeslot slot invalidation is logged with vacuum on pg_authid' is very
> similar to what is discussed here [1]
> 2. '+ERROR: tupl
On Mon, Mar 31, 2025 at 3:45 PM Melanie Plageman
wrote:
>
> Whoops, this isn't right. It does work. I'm going to draft a version
> suggesting slightly different variable naming and a couple comments to
> make this more clear.
Okay, so after further study, I think there are
On Wed, Apr 2, 2025 at 1:20 PM Nazir Bilal Yavuz wrote:
>
> On Wed, 2 Apr 2025 at 18:54, Melanie Plageman
> wrote:
> >
> > On Wed, Apr 2, 2025 at 6:26 AM Nazir Bilal Yavuz wrote:
> > >
> I don't have an example code right now. But what I mean is we may cal
On Fri, Apr 4, 2025 at 3:27 PM Konstantin Knizhnik wrote:
>
> From logical point of view I agree with you: taken in account number of
> inserted tuples makes sense if it allows to mark page as all-visible.
> So `ins_since_vacuum` should be better renamed to
> `ins_all_visible_since_vacuum` and c
On Fri, Apr 4, 2025 at 4:17 AM Nazir Bilal Yavuz wrote:
>
> Same on my end, v14 LGTM.
Cool. Pushed and marked as such in the CF app.
- Melanie
On Mon, Mar 31, 2025 at 3:27 PM Melanie Plageman
wrote:
>
> I think there might be another problem with the code. We only set
> cur_database in the loop in autoprewarm_database_main() when it is 0
>
> if (cur_database != blk->database)
> {
>
On Tue, Apr 1, 2025 at 5:30 PM Masahiko Sawada wrote:
>
>
> I've attached the new version patch. There are no major changes; I
> fixed some typos, improved the comment, and removed duplicated codes.
> Also, I've updated the commit messages.
I haven't looked closely at this version but I did notic
On Fri, Apr 4, 2025 at 10:04 AM Heikki Linnakangas wrote:
>
> In apw_load_buffers(), we also load the file into (DSM) memory. There's
> no similar 1 GB limit in dsm_create(), but I think it's a bit
> unfortunate that the array needs to be allocated upfront upon loading.
Unrelated to this problem,
On Fri, Apr 4, 2025 at 1:53 AM Konstantin Knizhnik wrote:
>
> What is needed to reproduce the problem?
> 1. Table with populated data
> 2. Presence of transaction with assigned XID which prevents vacuum from
> marking pages of this table as all visible
> 3. Vacuum or autovacuum processed this tabl
On Thu, Apr 3, 2025 at 4:37 PM Sami Imseih wrote:
>
> From what I can tell in your example, you ran the manual vacuum ( session 1)
> while you had an open transaction (session 2), so vacuum could not remove
> the dead tuples or update the visibility map. Once you committed session 2,
> autovacuum
> it's reached the end of the current
> relation. I believe it's possible to have two different relations with
> the same relnumber, in different tablespaces.
Good catch. I've included a fix for this in the attached set (0001)
- Melanie
From 9d5e4bf6a05d0a3f2838dd9efa8715b05be77423 Mon
On Wed, Apr 2, 2025 at 8:25 PM Melanie Plageman
wrote:
>
> On Wed, Apr 2, 2025 at 3:34 PM Melanie Plageman
> wrote:
> > attached v10
>
> Attached v11 has an assortment of cosmetic updates.
Attached v12 fixes a bug Bilal found off-list in 0002 related to
handling invalid bl
On Wed, Apr 2, 2025 at 3:34 PM Melanie Plageman
wrote:
> attached v10
Attached v11 has an assortment of cosmetic updates.
- Melanie
From 3b1657a0ba51299fa0384a0032759f3661a2baca Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Mon, 31 Mar 2025 22:02:25 -0400
Subject: [PATCH v11
On Wed, Apr 2, 2025 at 6:26 AM Nazir Bilal Yavuz wrote:
>
> On Wed, 2 Apr 2025 at 01:36, Melanie Plageman
> wrote:
> >
> > I know you mentioned off-list that you don't like the handling of
> > global objects in my version, but I prefer doing it this way (even
&g
started looking
at the last patch you've been sending that is about checking the
freelist. I'll have to do that next.
- Melanie
From 2d14f37ea824a6ce3605f53b0a9c5622f08bf6e7 Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Mon, 31 Mar 2025 22:02:25 -0400
Subject: [PATCH 1/2] Refactor
On Mon, Mar 31, 2025 at 6:03 PM Masahiko Sawada wrote:
>
> With commit c120550edb86, If we got the cleanup lock on the page,
> lazy_scan_prune() marks dead item IDs directly to LP_UNUSED. So the
> following check with has_lpdead_items made the periodic FSM vacuum in
> the one-pass strategy vacuum
you both were envisioning, though.
- Melanie
From 79d7e930de510cad6aef31532b06ba679a72d94a Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Sun, 30 Mar 2025 14:03:55 -0400
Subject: [PATCH] Add errcontext for processing I/Os for another backend
Push an ErrorContextCallback adding additional
On Sat, Mar 29, 2025 at 4:44 PM Andres Freund wrote:
>
> How about having an iterator function operating on a pointer to iterator state
> that's used both by the main loop and the read stream callback? If the
> iterator reaches the next relation, it returns InvalidBlockNumber and the main
> loop s
On Fri, Nov 29, 2024 at 6:20 AM Nazir Bilal Yavuz wrote:
>
> v4 is attached.
I've started looking at this version. What I don't like about this
structure is that we are forced to increment the position in the array
of BlockInfoRecords in both the callback and the main loop in
autoprewarm_database
On Sat, Mar 29, 2025 at 4:09 PM Melanie Plageman
wrote:
>
> One alternative is to loop through the array of BlockInfoRecords and
> get the start and end positions of the blocks in the array for a
> single relation/fork combo. Then we could make the read stream and
> pass those tw
On Sat, Mar 29, 2025 at 2:25 PM Andres Freund wrote:
>
> I think I found an issue with this one - as it stands the view was viewable by
> everyone. While it doesn't provide a *lot* of insight, it still seems a bit
> too much for an unprivileged user to learn what part of a relation any other
> use
On Thu, Mar 27, 2025 at 2:46 PM Matheus Alcantara
wrote:
>
> Just my 0.2 cents. I also like the first approach even though I prefer
> the v4 version, but anyway, thanks very much for reviewing and
> committing!
Thanks for the patch!
FWIW, what I strongly disliked about v4 was that two separate parts of t
On Thu, Mar 27, 2025 at 4:45 AM Nazir Bilal Yavuz wrote:
>
> I liked the first approach more. We can solve the first approach's
> problems by introducing a void pointer to pass to
> read_stream_begin_relation(). We can set it to &rsdata.range for the
> SKIP_PAGES_NONE case and &rsdata for the rest
On Mon, Mar 17, 2025 at 9:47 AM Matheus Alcantara
wrote:
>
> Sorry for the delay, attached v4 with the remaining fixes.
Thanks for the patch.
I started reviewing this with the intent to commit it. But, I decided
while studying it that I want to separate the SKIP_PAGES_NONE case and
the other cas
On Mon, Mar 24, 2025 at 7:58 PM Masahiko Sawada wrote:
>
> You're right. I've studied the read stream code and figured out how to
> use it. In the attached patch, we end the read stream at the end of
> phase 1 and start a new read stream, as you suggested.
I've started looking at this patch set s
On Fri, Mar 21, 2025 at 3:23 PM Melanie Plageman
wrote:
>
> I've committed the btree and gist read stream users. I think we can
> come back to the test after feature freeze and make sure it is super
> solid.
I've now committed the spgist vacuum user as well. I'll mar
On Sun, Mar 23, 2025 at 10:00 AM Alexander Lakhin wrote:
>
> Interestingly enough, with "log_autovacuum_min_duration = 0" added to
> TEMP_CONFIG, I can't see "automatic vacuum/analyze" messages related
> to ft2/ "S 1"."T 1", so autovacuum somehow affects contents of this table
> indirectly.
>
> Wi
On Mon, Mar 24, 2025 at 12:14 PM Melanie Plageman
wrote:
>
> This is the patch I intend to commit to fix this assuming my CI passes
> and there are no objections.
And pushed in aea916fe555a3
- Melanie
On Sun, Mar 23, 2025 at 1:27 PM Melanie Plageman
wrote:
>
> Perhaps it is better I just fix it since ripping out the skip fetch
> optimization has to be backported and even though that will look very
> different on master than on backbranches, I wonder if people will look
> for a
On Sat, Mar 22, 2025 at 5:04 PM Andres Freund wrote:
>
> The problem is that sometimes recheck is performed for pending empty
> tuples. The comment about empty tuples says:
>
> /*
> * Bitmap is exhausted. Time to emit empty tuples if relevant. We emit
> * all empty tuples
On Thu, Mar 20, 2025 at 4:36 AM Masahiko Sawada wrote:
>
> When testing the multi passes of table vacuuming, I found an issue.
> With the current patch, both leader and parallel workers process stop
> the phase 1 as soon as the shared TidStore size reaches to the limit,
> and then the leader resum
On Fri, Mar 21, 2025 at 8:19 AM Andrey Borodin wrote:
>
> > On 21 Mar 2025, at 05:54, Melanie Plageman
> > wrote:
> >
> > I also think it is worth making a btree directory in src/test instead
> > of adding this to src/test/modules/test_misc. In fact it might be
On Tue, Mar 18, 2025 at 4:12 PM Andres Freund wrote:
>
> Attached is v2.10
This is a review of 0002: bufmgr: Improve stats when buffer is read
in concurrently
In the commit message, it might be worth distinguishing that
pg_stat_io and vacuum didn't double count reads, they under-counted
hits. p
On Tue, Mar 18, 2025 at 4:12 PM Andres Freund wrote:
>
> Attached is v2.10,
I noticed a few comments could be improved in 0011: bufmgr: Use AIO
in StartReadBuffers()
In WaitReadBuffers(), this comment is incomplete:
/*
-* Skip this block if someone else has already completed it
On Wed, Mar 19, 2025 at 4:54 AM Christophe Pettus wrote:
>
> Everything seems to point to the vacuum free space map operation, since it
> would have a lot of work to do in that particular situation, it happens at
> just the right place in the vacuum cycle, and its resource consumption is not
>
On Thu, Nov 28, 2024 at 10:29 AM Kirill Reshke wrote:
>
> 0001 Looks mature. Some comments:
> 1)
> >+# This ensures autovacuum does not run
> >+$node->append_conf('postgresql.conf', 'autovacuum = off');
> The other option is to set `autovacuum = off` in relation DDL. I'm not
> sure which option is b
On Tue, Mar 18, 2025 at 4:12 PM Andres Freund wrote:
> Attached is v2.10,
This is a review of 0008: bufmgr: Implement AIO read support
I'm afraid it is more of a cosmetic review than a sign-off on the
patch's correctness, but perhaps it will help future readers who may
have the same question
On Tue, Mar 18, 2025 at 11:12 AM Melanie Plageman
wrote:
>
> I'm looking at 0001 with the intent of committing it soon. Today I've
> just been studying the test with the injection points.
Now, I've reviewed 0001. I manually validated it does read combining etc.
Few com
On Tue, Mar 18, 2025 at 11:12 AM Melanie Plageman
wrote:
>
> I'm looking at 0001 with the intent of committing it soon. Today I've
> just been studying the test with the injection points.
>
> My main thought is that you should rename the injection points to
> somethi
On Mon, Mar 17, 2025 at 2:55 PM Andres Freund wrote:
>
> On 2025-03-17 14:52:02 -0400, Melanie Plageman wrote:
> > I don't feel strongly that we need to be as rigorous for
> > maintenance_io_concurrency, but I'm also not sure 160 seems reasonable
> > (which
that a bit, but I avoided getting
into too many details.
- Melanie
From 398282293c38f9dfb4f03839a4a7962887c9c0c9 Mon Sep 17 00:00:00 2001
From: Melanie Plageman
Date: Mon, 17 Mar 2025 15:17:21 -0400
Subject: [PATCH v1] Enable IO concurrency on all systems
Previously effective_io_concurrency and maintena
On Mon, Mar 17, 2025 at 3:44 AM Jakub Wartak
wrote:
>
> dunno, I've just asked if it isn't suspicious to anyone else except me
> that e_io_c > m_io_c rather than e_io_c <= m_io_c. My understanding
> was always that to tune max IO queue depth you would do this:
> a. up to N backends (up to max_conn
On Thu, Mar 13, 2025 at 5:41 PM Melanie Plageman
wrote:
>
> Overall, I feel pretty good about merging this once Thomas merges the
> read stream patches.
This was committed in 944e81bf99db2b5b70b, 2b73a8cd33b745c, and
c3953226a07527a1e2.
I've marked it as committed in the CF app.
On Thu, Mar 13, 2025 at 5:46 AM Jakub Wartak
wrote:
>
> Cool, anything > 1 is just better. Just quick question, so now we have:
>
> #define DEFAULT_EFFECTIVE_IO_CONCURRENCY 16
> #define DEFAULT_MAINTENANCE_IO_CONCURRENCY 10
>
> Shouldn't maintenance be now also at the same value (16) instead of
>
Thanks for taking a look. I've pushed the patch to increase the
default effective_io_concurrency.
On Tue, Mar 11, 2025 at 8:07 PM Andres Freund wrote:
>
> On 2025-03-10 19:45:38 -0400, Melanie Plageman wrote:
> > From 7b35b1144bddf202fb4d56a9b783751a0945ba0e Mon Sep 17 00:00
On Mon, Mar 3, 2025 at 6:43 PM Melanie Plageman
wrote:
>
> Okay, I got cold feet today working on this. I actually think we
> probably want some kind of guc set (set like union/intersection/etc
> not set like assign a value) infrastructure for gucs that can be equal
> to any
On Tue, Mar 11, 2025 at 6:27 PM Melanie Plageman
wrote:
>
> I did more manual testing of my patches, and I think they are mostly
> ready for commit except for the IsConnectionBackend() macro (if we
> have something to change it to).
I've committed this and marked it as such in t
On Mon, Mar 10, 2025 at 5:03 PM Daniel Gustafsson wrote:
>
> > On 7 Mar 2025, at 23:08, Melanie Plageman wrote:
>
> Sorry for being late to the party. I like this functionality and would
> definitely like to see it in v18.
Thanks so much for the review!
> + /* For ba
On Tue, Mar 11, 2025 at 1:56 PM Andres Freund wrote:
>
> Hi,
>
> On 2025-03-11 11:31:18 -0400, Melanie Plageman wrote:
> > Commit messages for 0017-0020 are thin. I assume you will beef them up
> > a bit before committing.
>
> Yea. I wanted to get some feedback on wh
On Mon, Mar 10, 2025 at 2:23 PM Andres Freund wrote:
>
> - 0016 to 0020 - cleanups for temp buffers code - I just wrote these to clean
> up the code before making larger changes, needs review
This is a review of 0016-0020
Commit messages for 0017-0020 are thin. I assume you will beef them up
a
On Sat, Mar 8, 2025 at 1:42 AM Masahiko Sawada wrote:
>
>
> I've attached the updated version patches.
I've started trying to review this and realized that, while I'm
familiar with heap vacuuming code, I'm not familiar enough with the
vacuumparallel.c machinery to be of help without much addition
On Mon, Mar 10, 2025 at 3:06 PM Melanie Plageman
wrote:
>
> The takeaway is that I think 16 is a good default effective_io_concurrency
> value.
Attached v35 0003-0005 is the same as v34 but is on top of two new
commits: 0001 increases the default effective_io_concurrency to 16 and
00
On Sun, Mar 9, 2025 at 9:24 PM John Naylor wrote:
>
> On Mon, Mar 10, 2025 at 1:46 AM Masahiko Sawada wrote:
> >
> > Commit bbf668d66fbf6 (back-patched to v17) lowered the minimum
> > maintenance_work_mem to 64kB, but it doesn't work for parallel vacuum
>
> That was done in the first place to mak
On Thu, Mar 6, 2025 at 4:38 PM Robert Haas wrote:
>
> On Thu, Mar 6, 2025 at 4:27 PM Tom Lane wrote:
>
> > My point was that we can implement such a policy in a laissez-faire
> > way: if an older BF animal isn't causing us trouble then why mess
> > with it? Once we *do* recognize that it's causi
On Fri, Mar 7, 2025 at 10:09 AM Bertrand Drouvot
wrote:
>
> Given that conn_timing.ready_for_use is only used here:
>
> + if (!reported_first_ready_for_query &&
> +     (log_connections & LOG_CONNECTION_READY_FOR_USE) &&
> +
On Fri, Mar 7, 2025 at 4:53 PM Jacob Champion
wrote:
>
> If I add a hypothetical auth method in the future that authenticates,
> and then farms the authorization decision out to some slow-moving
> network machinery, would "authenticated" retroactively become a stage
> then? (OAuth almost does this
"durations".
I'm a bit torn about having the tests in authentication/001_password.
On the one hand, it doesn't make sense to make a separate test file
and initialize a new postgres just to test two or three options of one
GUC. On the other hand, these tests don't fit very well in