On Sat, Apr 5, 2025 at 1:17 PM Melanie Plageman wrote:
>
> On Fri, Apr 4, 2025 at 5:35 PM Masahiko Sawada wrote:
> >
> > > I haven't looked closely at this version but I did notice that you do
> > > not document that parallel vacuum disables eager scanning. Imagine you
> > > are a user who has se
On Fri, Apr 18, 2025 at 2:49 AM Hayato Kuroda (Fujitsu) wrote:
>
> Dear Sawada-san,
>
> Thanks for updating the patch. I have been reviewing and below are comments for now.
Thank you for reviewing the patch!
>
> 01.
> Sorry if I forgot something, but is there a reason that
> parallel_vacuum_c
On Mon, Apr 7, 2025 at 5:20 PM Peter Smith wrote:
>
> Hi Sawada-san,
>
> Here are some review comments for the patch v16-0002
>
> ==
> Commit message
>
> 1.
> Missing period.
> /cleanup, ParallelVacuumState/cleanup. ParallelVacuumState/
> ~~~
>
> 2.
> Typo /paralel/parallel/
>
> ~~~
>
> 3.
> H
On Sun, Apr 6, 2025 at 10:27 PM Peter Smith wrote:
>
> Hi Sawada-san.
>
> I was revisiting this thread after a long time. I found most of my
> previous review comments from v11-0001 were not yet addressed. I can't
> tell if they are deliberately left out, or if they are accidentally
> overlooked.
Dear Sawada-san,
Thanks for updating the patch. I have been reviewing and below are comments for now.
01.
Sorry if I forgot something, but is there a reason that parallel_vacuum_compute_workers() is mandatory? My fresh eye felt that the function can check and regard it as zero if the API is NULL.
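For reference, a minimal sketch of the optional-callback idea suggested here. The callback name comes from the patch discussion; the wrapper function and field access are assumptions, not the actual patch:
```
/*
 * Hypothetical wrapper: treat a NULL parallel_vacuum_compute_workers
 * callback as "this AM does not support parallel heap vacuum" instead
 * of requiring every table AM to implement it.
 */
static int
table_parallel_vacuum_compute_workers(Relation rel, int nworkers_requested)
{
    /* An AM that leaves the callback unset simply gets zero workers. */
    if (rel->rd_tableam->parallel_vacuum_compute_workers == NULL)
        return 0;

    return rel->rd_tableam->parallel_vacuum_compute_workers(rel,
                                                            nworkers_requested);
}
```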
Hi Sawada-san,
Here are some review comments for the patch v16-0002
==
Commit message
1.
Missing period.
/cleanup, ParallelVacuumState/cleanup. ParallelVacuumState/
~~~
2.
Typo /paralel/parallel/
~~~
3.
Heap table AM disables the parallel heap vacuuming for now, but an
upcoming patch use
Hi Sawada-san.
I was revisiting this thread after a long time. I found most of my
previous review comments from v11-0001 were not yet addressed. I can't
tell if they are deliberately left out, or if they are accidentally
overlooked. Please see the details below.
On Mon, Mar 10, 2025 at 3:05 PM Pe
On Sun, Apr 6, 2025 at 1:02 AM Masahiko Sawada wrote:
>
> > The eager freeze scan is a pre-existing feature, but it's pretty new
> > code that was pushed just a couple of months ago. I didn't want to make
> > the newly introduced code even more complex in one major release,
> especially if it's in a vacuum a
On Sat, Apr 5, 2025 at 1:32 PM Andres Freund wrote:
>
> Hi,
>
> On 2025-04-04 14:34:53 -0700, Masahiko Sawada wrote:
> > On Fri, Apr 4, 2025 at 11:05 AM Melanie Plageman wrote:
> > >
> > > On Tue, Apr 1, 2025 at 5:30 PM Masahiko Sawada wrote:
> > > >
> > > >
> > > > I've attached the
Hi,
On 2025-04-04 14:34:53 -0700, Masahiko Sawada wrote:
> On Fri, Apr 4, 2025 at 11:05 AM Melanie Plageman wrote:
> >
> > On Tue, Apr 1, 2025 at 5:30 PM Masahiko Sawada wrote:
> > >
> > >
> > > I've attached the new version patch. There are no major changes; I
> > > fixed some typos, imp
On Fri, Apr 4, 2025 at 5:35 PM Masahiko Sawada wrote:
>
> > I haven't looked closely at this version but I did notice that you do
> > not document that parallel vacuum disables eager scanning. Imagine you
> > are a user who has set the eager freeze related table storage option
> > (vacuum_max_eage
On Sun, Mar 23, 2025 at 4:46 AM Masahiko Sawada wrote:
>
> If we use ParallelBlockTableScanDesc with streaming read like the
> patch did, we would also need to somehow rewind the number of blocks
> allocated to workers. The problem I had with such usage was that a
> parallel vacuum worker allocate
On Fri, Apr 4, 2025 at 11:05 AM Melanie Plageman wrote:
>
> On Tue, Apr 1, 2025 at 5:30 PM Masahiko Sawada wrote:
> >
> >
> > I've attached the new version patch. There are no major changes; I
> > fixed some typos, improved the comment, and removed duplicated code.
> > Also, I've updated the com
On Tue, Apr 1, 2025 at 5:30 PM Masahiko Sawada wrote:
>
>
> I've attached the new version patch. There are no major changes; I
> fixed some typos, improved the comment, and removed duplicated code.
> Also, I've updated the commit messages.
I haven't looked closely at this version but I did notic
On Wed, Mar 26, 2025 at 1:00 PM Melanie Plageman wrote:
>
> On Mon, Mar 24, 2025 at 7:58 PM Masahiko Sawada wrote:
> >
> > You're right. I've studied the read stream code and figured out how to
> > use it. In the attached patch, we end the read stream at the end of
> > phase 1 and start a new rea
On Mon, Mar 24, 2025 at 7:58 PM Masahiko Sawada wrote:
>
> You're right. I've studied the read stream code and figured out how to
> use it. In the attached patch, we end the read stream at the end of
> phase 1 and start a new read stream, as you suggested.
I've started looking at this patch set s
On Sun, Mar 23, 2025 at 10:13 AM Andres Freund wrote:
>
> Hi,
>
> On 2025-03-23 01:45:35 -0700, Masahiko Sawada wrote:
> > Another idea is that parallel workers don't exit phase 1 until they
> > consume all pinned buffers in the queue, even if the memory usage of
> > TidStore exceeds the limit.
>
>
Hi,
On 2025-03-23 01:45:35 -0700, Masahiko Sawada wrote:
> Another idea is that parallel workers don't exit phase 1 until they
> consume all pinned buffers in the queue, even if the memory usage of
> TidStore exceeds the limit.
Yes, that seems quite a reasonable approach to me.
> It would need t
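A hedged sketch of the approach being agreed on here; the helpers and fields marked "assumed" are illustrative, not the actual patch API:
```
/*
 * Phase 1 loop: once the shared TidStore reaches its memory limit, the
 * worker stops asking for new blocks but keeps consuming buffers the
 * read stream has already pinned before pausing for index vacuuming.
 */
bool    hit_limit = false;
Buffer  buf;

while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
{
    scan_heap_page_for_dead_items(vacrel, buf);         /* assumed helper */

    if (TidStoreMemoryUsage(vacrel->dead_items) > vacrel->memory_limit)
        hit_limit = true;                               /* assumed field */

    if (hit_limit && read_stream_pinned_buffers(stream) == 0) /* assumed */
        break;          /* safe to pause: nothing pinned remains queued */
}
```
In a real implementation the stream would also have to stop issuing new reads once the limit is hit, so that the loop only drains what is already pinned.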
On Sat, Mar 22, 2025 at 7:16 AM Melanie Plageman wrote:
>
> On Thu, Mar 20, 2025 at 4:36 AM Masahiko Sawada wrote:
> >
> > When testing multiple passes of table vacuuming, I found an issue.
> > With the current patch, both the leader and parallel worker processes stop
> > phase 1 as soon as the s
Hi,
On 2025-03-20 01:35:42 -0700, Masahiko Sawada wrote:
> One plausible solution would be that we don't use ReadStream in
> parallel heap vacuum cases but directly use
> table_block_parallelscan_xxx() instead. It works but we end up having
> two different scan methods for parallel and non-paralle
On Thu, Mar 20, 2025 at 4:36 AM Masahiko Sawada wrote:
>
> When testing multiple passes of table vacuuming, I found an issue.
> With the current patch, both the leader and parallel worker processes stop
> phase 1 as soon as the shared TidStore size reaches the limit,
> and then the leader resum
On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila wrote:
>
> On Wed, Mar 5, 2025 at 6:25 AM Masahiko Sawada wrote:
> >
> > On Mon, Mar 3, 2025 at 3:24 PM Masahiko Sawada wrote:
> > >
> > >
> > > Another performance regression I can see in the results is that heap
> > > vacuum phase (phase III) go
On Wed, Mar 12, 2025 at 3:05 AM Amit Kapila wrote:
>
> On Wed, Mar 12, 2025 at 3:12 AM Masahiko Sawada wrote:
> >
> > On Tue, Mar 11, 2025 at 6:00 AM Amit Kapila wrote:
> > >
> > > On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada wrote:
> > > >
> > > > On Sun, Mar 9, 2025 at 11:12 PM Ami
On Wed, Mar 12, 2025 at 11:24 AM Dilip Kumar wrote:
>
> On Wed, Mar 12, 2025 at 3:17 AM Masahiko Sawada wrote:
> >
> > On Tue, Mar 11, 2025 at 5:51 AM Amit Kapila wrote:
> > >
>
> Some thoughts/questions on the idea
>
> I notice that we are always considering block-level parallelism for
> heaps
On Wed, Mar 12, 2025 at 3:40 PM Amit Kapila wrote:
>
> On Wed, Mar 12, 2025 at 11:24 AM Dilip Kumar wrote:
> >
> > On Wed, Mar 12, 2025 at 3:17 AM Masahiko Sawada wrote:
> > >
> > > On Tue, Mar 11, 2025 at 5:51 AM Amit Kapila wrote:
> > > >
> >
> > Some thoughts/questions on the ide
On Wed, Mar 12, 2025 at 3:12 AM Masahiko Sawada wrote:
>
> On Tue, Mar 11, 2025 at 6:00 AM Amit Kapila wrote:
> >
> > On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada wrote:
> > >
> > > On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila wrote:
> > > >
> > > >
> > > > > However, in the heap
On Wed, Mar 12, 2025 at 3:17 AM Masahiko Sawada wrote:
>
> On Tue, Mar 11, 2025 at 5:51 AM Amit Kapila wrote:
> >
Some thoughts/questions on the idea
I notice that we are always considering block-level parallelism for
heaps and object-level parallelism for indexes. I'm wondering, when
multiple
On Wed, Mar 12, 2025 at 11:24 AM Dilip Kumar wrote:
>
> On Wed, Mar 12, 2025 at 3:17 AM Masahiko Sawada wrote:
> >
Some more specific comments on the patch set
-- v11-0001
1. This introduces functions like parallel_vacuum_estimate(),
parallel_vacuum_initialize(), etc., but these functions have
On Tue, Mar 11, 2025 at 5:51 AM Amit Kapila wrote:
>
> On Tue, Mar 11, 2025 at 5:00 AM Masahiko Sawada wrote:
> >
> > On Sun, Mar 9, 2025 at 11:28 PM Amit Kapila wrote:
> > >
> > >
> > > Does phase 3 also use parallelism? If so, can we try to divide the
> > > ring buffers among workers or at lea
On Tue, Mar 11, 2025 at 6:00 AM Amit Kapila wrote:
>
> On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada wrote:
> >
> > On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila wrote:
> > >
> > >
> > > > However, in the heap vacuum phase, the leader process needed
> > > > to process all blocks, resulting i
On Mon, Mar 10, 2025 at 5:03 PM Melanie Plageman wrote:
>
> On Sat, Mar 8, 2025 at 1:42 AM Masahiko Sawada wrote:
> >
> >
> > I've attached the updated version patches.
>
> I've started trying to review this and realized that, while I'm
> familiar with heap vacuuming code, I'm not familiar enough
On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada wrote:
>
> On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila wrote:
> >
> >
> > > However, in the heap vacuum phase, the leader process needed
> > > to process all blocks, resulting in soft page faults while creating
> > > Page Table Entries (PTEs). With
On Tue, Mar 11, 2025 at 5:00 AM Masahiko Sawada wrote:
>
> On Sun, Mar 9, 2025 at 11:28 PM Amit Kapila wrote:
> >
> >
> > Does phase 3 also use parallelism? If so, can we try to divide the
> > ring buffers among workers or at least try vacuum with an increased
> > number of ring buffers. This wou
On Sat, Mar 8, 2025 at 1:42 AM Masahiko Sawada wrote:
>
>
> I've attached the updated version patches.
I've started trying to review this and realized that, while I'm
familiar with heap vacuuming code, I'm not familiar enough with the
vacuumparallel.c machinery to be of help without much addition
On Sun, Mar 9, 2025 at 11:28 PM Amit Kapila wrote:
>
> On Fri, Mar 7, 2025 at 11:06 PM Masahiko Sawada wrote:
> >
> > Discussing with Amit offlist, I've run another benchmark test where no
> > data is loaded in shared buffers. In the previous test, I loaded
> > all table blocks before running
Hi Sawada-San.
Some review comments for patch v11-0003.
==
src/backend/access/heap/vacuumlazy.c
1.
+typedef struct LVScanData
+{
+ BlockNumber rel_pages; /* total number of pages */
+
+ BlockNumber scanned_pages; /* # pages examined (not skipped via VM) */
+
+ /*
+ * Count of all-visible blo
On Fri, Mar 7, 2025 at 11:06 PM Masahiko Sawada wrote:
>
> Discussing with Amit offlist, I've run another benchmark test where no
> data is loaded in shared buffers. In the previous test, I loaded
> all table blocks before running vacuum, so it was the best case. The
> attached test results sho
On Wed, Mar 5, 2025 at 6:25 AM Masahiko Sawada wrote:
>
> On Mon, Mar 3, 2025 at 3:24 PM Masahiko Sawada wrote:
> >
> >
> > Another performance regression I can see in the results is that heap
> > vacuum phase (phase III) got slower with the patch. It's weird to me
> > since I don't touch the co
Hi Sawada-San,
Here are some review comments for patch v11-0001.
==
src/backend/access/heap/vacuumlazy.c
1.
+/*
+ * Compute the number of workers for parallel heap vacuum.
+ *
+ * Return 0 to disable parallel vacuum so far.
+ */
+int
+heap_parallel_vacuum_compute_workers(Relation rel, int nw
Hi Sawada-San.
Here are some review comments for patch v11-0002
==
Commit message.
1.
Heap table AM disables the parallel heap vacuuming for now, but an
upcoming patch uses it.
This function implementation was moved into patch 0001, so probably
this part of the commit message comment also b
On Thu, Mar 6, 2025 at 5:33 PM Peter Smith wrote:
>
> Some minor review comments for patch v10-0001.
>
> ==
> src/include/access/tableam.h
>
> 1.
> struct IndexInfo;
> +struct ParallelVacuumState;
> +struct ParallelContext;
> +struct ParallelWorkerContext;
> struct SampleScanState;
>
> Use a
On Mon, Mar 3, 2025 at 3:24 PM Masahiko Sawada wrote:
>
> On Mon, Mar 3, 2025 at 1:28 AM Masahiko Sawada wrote:
> >
> > On Tue, Feb 25, 2025 at 4:49 PM Masahiko Sawada wrote:
> > >
> > > On Tue, Feb 25, 2025 at 2:44 PM Melanie Plageman wrote:
> > > >
> > > > On Tue, Feb 25, 2025 at 5
Hi Sawada-San. Here are some review comments for patch v10-0002.
==
src/backend/access/heap/heapam_handler.c
1.
.scan_bitmap_next_block = heapam_scan_bitmap_next_block,
.scan_bitmap_next_tuple = heapam_scan_bitmap_next_tuple,
.scan_sample_next_block = heapam_scan_sample_next_block,
- .s
Hi Sawada-San.
FYI. I am observing the following test behaviour:
I apply patch v10-0001, do a clean rebuild and run 'make check', and
all tests are OK.
Then, after I apply just patch v10-0002 on top of 0001, do a clean
rebuild and run 'make check' there are many test fails.
==
Kind Regards,
Some minor review comments for patch v10-0001.
==
src/include/access/tableam.h
1.
struct IndexInfo;
+struct ParallelVacuumState;
+struct ParallelContext;
+struct ParallelWorkerContext;
struct SampleScanState;
Use alphabetical order for consistency with existing code.
~~~
2.
+ /*
+ * Esti
On Mon, Mar 3, 2025 at 3:24 PM Masahiko Sawada wrote:
>
>
> Another performance regression I can see in the results is that heap
> vacuum phase (phase III) got slower with the patch. It's weird to me
> since I don't touch the code of the heap vacuum phase. I'm still
> investigating the cause.
I have
On Mon, Mar 3, 2025 at 1:28 AM Masahiko Sawada wrote:
>
> On Tue, Feb 25, 2025 at 4:49 PM Masahiko Sawada wrote:
> >
> > On Tue, Feb 25, 2025 at 2:44 PM Melanie Plageman wrote:
> > >
> > > On Tue, Feb 25, 2025 at 5:14 PM Masahiko Sawada wrote:
> > > >
> > > > Given that we have only
On Tue, Feb 25, 2025 at 2:44 PM Melanie Plageman wrote:
>
> On Tue, Feb 25, 2025 at 5:14 PM Masahiko Sawada wrote:
> >
> > Given that we have only about one month until the feature freeze, I
> > find it realistic to introduce either one of the parallelisms for PG18,
> > and at least we might want t
On Tue, Feb 25, 2025 at 5:14 PM Masahiko Sawada wrote:
>
> Given that we have only about one month until the feature freeze, I
> find it realistic to introduce either one of the parallelisms for PG18,
> and at least we might want to implement first the one that is more
> beneficial and helpful for u
On Tue, Feb 25, 2025 at 9:59 AM Melanie Plageman wrote:
>
> On Mon, Feb 24, 2025 at 8:15 PM Masahiko Sawada wrote:
> >
> > What I can see from these results is that we might not benefit much
> > from parallelizing phase III, unfortunately. Although in the best case
> > the phase III got about 2x
On Mon, Feb 24, 2025 at 8:15 PM Masahiko Sawada wrote:
>
> What I can see from these results is that we might not benefit much
> from parallelizing phase III, unfortunately. Although in the best case
> phase III got about a 2x speedup, as for the total duration it's
> only about a 10% speedup. My
On Sun, Feb 23, 2025 at 8:51 PM John Naylor wrote:
>
> On Tue, Feb 18, 2025 at 1:11 AM Masahiko Sawada wrote:
> >
> > Right. I'm inclined to support only the second heap pass as the first
> > step. If we support parallelism only for the second pass, it cannot
> > help speed up freezing the entire
On Mon, Feb 17, 2025 at 12:11 PM Masahiko Sawada wrote:
> > If the idea is to never allow parallelism in vacuum, then I think
> > disabling eager scanning during manual parallel vacuum seems
> > reasonable. People could use vacuum freeze if they want more freezing.
>
> IIUC the purpose of paralle
On Tue, Feb 18, 2025 at 1:11 AM Masahiko Sawada wrote:
>
> Right. I'm inclined to support only the second heap pass as the first
> step. If we support parallelism only for the second pass, it cannot
> help speed up freezing the entire table in emergency situations, but
> it would be beneficial for
On Tue, Feb 18, 2025 at 4:43 PM Melanie Plageman wrote:
>
> On Mon, Feb 17, 2025 at 1:11 PM Masahiko Sawada wrote:
> >
> > On Fri, Feb 14, 2025 at 2:21 PM Melanie Plageman wrote:
> > >
> > > Since the failure rate is defined as a percent, couldn't we just have
> > > parallel workers set eage
On Mon, Feb 17, 2025 at 1:11 PM Masahiko Sawada wrote:
>
> On Fri, Feb 14, 2025 at 2:21 PM Melanie Plageman wrote:
> >
> > Since the failure rate is defined as a percent, couldn't we just have
> > parallel workers set eager_scan_remaining_fails when they get their
> > chunk assignment (as a per
On Fri, Feb 14, 2025 at 2:21 PM Melanie Plageman wrote:
>
> On Wed, Feb 12, 2025 at 5:37 PM Masahiko Sawada wrote:
> >
> > Since we introduced the eager vacuum scan (052026c9b9), I need to
> > update the parallel heap vacuum patch. After thinking of how to
> > integrate these two features, I find
On Wed, Feb 12, 2025 at 5:37 PM Masahiko Sawada wrote:
>
> Since we introduced the eager vacuum scan (052026c9b9), I need to
> update the parallel heap vacuum patch. After thinking of how to
> integrate these two features, I find some complexities. The region
> size used by eager vacuum scan and t
On Thu, Feb 13, 2025 at 8:16 PM John Naylor wrote:
>
> On Thu, Feb 13, 2025 at 5:37 AM Masahiko Sawada wrote:
> >
> > During eager vacuum scan, we reset the eager_scan_remaining_fails
> > counter when we start to scan a new region. So if we want to make
> > parallel heap vacuum behave exactly
On Thu, Feb 13, 2025 at 5:37 AM Masahiko Sawada wrote:
>
> During eager vacuum scan, we reset the eager_scan_remaining_fails
> counter when we start to scan a new region. So if we want to make
> parallel heap vacuum behave exactly the same way as the
> single-process vacuum in terms of the eag
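For reference, a sketch of the per-region reset being described. The region size, field, and parameter names are assumptions based on the eager-scan work, not quotes from it:
```
/* Reset the failure budget each time scanning enters a new region. */
#define EAGER_SCAN_REGION_SIZE  4096        /* blocks; assumed value */

if (blkno % EAGER_SCAN_REGION_SIZE == 0)
    vacrel->eager_scan_remaining_fails =
        (BlockNumber) (EAGER_SCAN_REGION_SIZE *
                       vacuum_max_eager_freeze_failure_rate);
```
A parallel vacuum that hands out block ranges to workers would have to reproduce this reset at each worker's region boundaries to match the single-process behavior.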
On Tue, Jan 21, 2025 at 11:05 AM Masahiko Sawada wrote:
>
> On Sun, Jan 19, 2025 at 7:50 AM Tomas Vondra wrote:
> >
> > Hi,
> >
> > Thanks for the new patches. I've repeated my benchmarking on v8, and I
> > agree this looks fine - the speedups are reasonable and match what I'd
> > expect on this
On Sun, Jan 19, 2025 at 7:50 AM Tomas Vondra wrote:
>
> Hi,
>
> Thanks for the new patches. I've repeated my benchmarking on v8, and I
> agree this looks fine - the speedups are reasonable and match what I'd
> expect on this hardware. I don't see any suspicious results like with
> the earlier patc
On Fri, Jan 17, 2025 at 10:43 PM Masahiko Sawada wrote:
> On Fri, Jan 17, 2025 at 1:43 AM Dilip Kumar wrote:
> >
> > On Fri, Jan 17, 2025 at 6:37 AM Masahiko Sawada wrote:
> >>
> >> On Sun, Jan 12, 2025 at 1:34 AM Masahiko Sawada wrote:
> >> >
> >
> >
> > IIRC, one of the blockers
On Fri, Jan 17, 2025 at 1:43 AM Dilip Kumar wrote:
>
> On Fri, Jan 17, 2025 at 6:37 AM Masahiko Sawada wrote:
>>
>> On Sun, Jan 12, 2025 at 1:34 AM Masahiko Sawada wrote:
>> >
>
>
> IIRC, one of the blockers for implementing parallel heap vacuum was
> group locking; have we already
On Fri, Jan 17, 2025 at 6:37 AM Masahiko Sawada wrote:
> On Sun, Jan 12, 2025 at 1:34 AM Masahiko Sawada wrote:
> >
>
IIRC, one of the blockers for implementing parallel heap vacuum was
group locking; have we already resolved that issue, or is it being
included in this patch set?
--
R
On 12/19/24 23:05, Masahiko Sawada wrote:
> On Sat, Dec 14, 2024 at 1:24 PM Tomas Vondra wrote:
>>
>> On 12/13/24 00:04, Tomas Vondra wrote:
>>> ...
>>>
>>> The main difference is here:
>>>
>>>
>>> master / no parallel workers:
>>>
>>> pages: 0 removed, 221239 remain, 221239 scanned (100.00%
Dear Sawada-san,
Thanks for updating the patch. ISTM that 0001 and 0002 can be applied
independently.
Therefore I will first post some comments only for them.
Comments for 0001:
```
+/* New estimated total # of tuples and total # of live tuples */
```
There is an unnecessary blank.
```
+
On Sat, Dec 14, 2024 at 1:24 PM Tomas Vondra wrote:
>
> On 12/13/24 00:04, Tomas Vondra wrote:
> > ...
> >
> > The main difference is here:
> >
> >
> > master / no parallel workers:
> >
> > pages: 0 removed, 221239 remain, 221239 scanned (100.00% of total)
> >
> > 1 parallel worker:
> >
> > pa
On 12/13/24 00:04, Tomas Vondra wrote:
> ...
>
> The main difference is here:
>
>
> master / no parallel workers:
>
> pages: 0 removed, 221239 remain, 221239 scanned (100.00% of total)
>
> 1 parallel worker:
>
> pages: 0 removed, 221239 remain, 10001 scanned (4.52% of total)
>
>
> Clearl
On 12/13/24 00:04, Tomas Vondra wrote:
>
> ...
> Attached are results.csv with raw data, and a PDF showing the difference
> between master and patched build with varying number of workers. The
> columns on the right show timing relative to master (with no parallel
> workers). Green means "faster"
On 12/9/24 19:47, Tomas Vondra wrote:
> Hi,
>
> Thanks for working on this. I took a quick look at this today, to do
> some basic review. I plan to do a bunch of testing, but that's mostly to
> get a better idea of what kind of improvements to expect - the initial
> results look quite nice and sen
On Mon, Dec 9, 2024 at 2:11 PM Tomas Vondra wrote:
>
> Hi,
>
> Thanks for working on this. I took a quick look at this today, to do
> some basic review. I plan to do a bunch of testing, but that's mostly to
> get a better idea of what kind of improvements to expect - the initial
> results look qui
Dear Tomas,
> 1) I really like the idea of introducing "compute_workers" callback to
> the heap AM interface. I faced a similar issue with calculating workers
> for index builds, because right now plan_create_index_workers is doing
> that; the logic works for btree, but really not for brin etc. It didn
Hi,
Thanks for working on this. I took a quick look at this today, to do
some basic review. I plan to do a bunch of testing, but that's mostly to
get a better idea of what kind of improvements to expect - the initial
results look quite nice and sensible.
A couple basic comments:
1) I really like
Hi Sawada-San.
FWIW, here is the remainder of my review comments for patch v4-0001
==
src/backend/access/heap/vacuumlazy.c
lazy_scan_heap:
2.1.
+ /*
+ * Do the actual work. If parallel heap vacuum is active, we scan and
+ * vacuum heap with parallel workers.
+ */
/with/using/
~~~
2
Hi Sawada-San,
I started to look at patch v4-0001 in this thread.
It is quite a big patch, so this is a WIP; below are just the comments I
have so far.
==
src/backend/access/heap/vacuumlazy.c
1.1.
+/*
+ * Relation statistics collected during heap scanning and need to be
shared amon
Hi Sawada-San,
FYI, the patch 0001 fails to build stand-alone
vacuumlazy.c: In function ‘parallel_heap_vacuum_gather_scan_stats’:
vacuumlazy.c:3739:21: error: ‘LVRelScanStats’ has no member named
‘vacuumed_pages’
vacrel->scan_stats->vacuumed_pages += ss->vacuumed_pages;
^
Dear Sawada-san,
>
> BTW while updating the patch, I found that we might want to launch
> different numbers of workers for scanning the heap and vacuuming the heap. The
> number of parallel workers is determined based on the number of blocks
> in the table. However, even if this number is high, it could h
On Tue, Nov 12, 2024 at 3:21 AM vignesh C wrote:
>
> On Wed, 30 Oct 2024 at 22:48, Masahiko Sawada wrote:
> >
> >
> > I've attached new version patches that fixes failures reported by
> > cfbot. I hope these changes make cfbot happy.
> >
>
> I just started reviewing the patch and found the follow
On Wed, Nov 13, 2024 at 3:10 AM Hayato Kuroda (Fujitsu) wrote:
>
> Dear Sawada-san,
>
> > TidStoreBeginIterateShared() is designed for multiple parallel workers
> > to iterate a shared TidStore. During an iteration, parallel workers
> > share the iteration state and iterate the underlying radix tr
Dear Sawada-san,
> TidStoreBeginIterateShared() is designed for multiple parallel workers
> to iterate a shared TidStore. During an iteration, parallel workers
> share the iteration state and iterate the underlying radix tree while
> taking appropriate locks. Therefore, it's available only for a s
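A sketch of how a worker might drive the shared iteration described above. The shared-iteration signature is assumed from this description and from the existing non-shared TidStore iteration API; the page-level helper is hypothetical:
```
/* Each worker joins the shared iteration; the radix tree is walked
 * under appropriate locks, so workers receive disjoint blocks. */
TidStoreIter       *iter = TidStoreBeginIterateShared(dead_items);
TidStoreIterResult *result;

while ((result = TidStoreIterateNext(iter)) != NULL)
    vacuum_one_heap_page(vacrel, result->blkno);    /* assumed helper */

TidStoreEndIterate(iter);
```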
On Wed, 30 Oct 2024 at 22:48, Masahiko Sawada wrote:
>
>
> I've attached new version patches that fixes failures reported by
> cfbot. I hope these changes make cfbot happy.
>
I just started reviewing the patch and found the following comments
while going through the patch:
1) I felt we should add
On Mon, Nov 11, 2024 at 5:08 AM Hayato Kuroda (Fujitsu) wrote:
>
> Dear Sawada-san,
>
> >
> > I've attached new version patches that fixes failures reported by
> > cfbot. I hope these changes make cfbot happy.
>
> Thanks for updating the patch and sorry for the delayed reply. I confirmed
> cfbot
Dear Sawada-san,
>
> I've attached new version patches that fixes failures reported by
> cfbot. I hope these changes make cfbot happy.
Thanks for updating the patch and sorry for the delayed reply. I confirmed cfbot for Linux/Windows said ok.
I'm still learning the feature so I can post only on
Sorry for the very late reply.
On Tue, Jul 30, 2024 at 8:54 PM Hayato Kuroda (Fujitsu) wrote:
>
> Dear Sawada-san,
>
> > Thank you for testing!
>
> I tried to profile the vacuuming with the larger case (40 workers for the
> 20G table) and attached FlameGraph showed the result. IIUC, I cannot f
On Thu, Jul 25, 2024 at 2:58 AM Hayato Kuroda (Fujitsu) wrote:
>
> Dear Sawada-san,
>
> > Thank you for the test!
> >
> > I could reproduce this issue and it's a bug; it skipped even
> > non-all-visible pages. I've attached the new version patch.
> >
> > BTW since we compute the number of parallel
Dear Sawada-san,
> Thank you for the test!
>
> I could reproduce this issue and it's a bug; it skipped even
> non-all-visible pages. I've attached the new version patch.
>
> BTW since we compute the number of parallel workers for the heap scan
> based on the table size, it's possible that we lau
On Fri, Jun 28, 2024 at 9:06 PM Amit Kapila wrote:
>
> On Fri, Jun 28, 2024 at 9:44 AM Masahiko Sawada wrote:
> >
> > # Benchmark results
> >
> > * Test-1: parallel heap scan on the table without indexes
> >
> > I created a 20GB table, made garbage on the table, and ran vacuum while
> > changing pa
On Fri, Jul 5, 2024 at 6:51 PM Hayato Kuroda (Fujitsu) wrote:
>
> Dear Sawada-san,
>
> > The parallel vacuum we have today supports only index vacuuming.
> > Therefore, while multiple workers can work on different indexes in
> > parallel, the heap table is always processed by the single proces
Dear Sawada-san,
> The parallel vacuum we have today supports only index vacuuming.
> Therefore, while multiple workers can work on different indexes in
> parallel, the heap table is always processed by the single process.
> I'd like to propose $subject, which enables us to have multiple
> wor
On Fri, Jun 28, 2024 at 9:44 AM Masahiko Sawada wrote:
>
> # Benchmark results
>
> * Test-1: parallel heap scan on the table without indexes
>
> I created a 20GB table, made garbage on the table, and ran vacuum while
> changing the parallel degree:
>
> create unlogged table test (a int) with (autovacuum