On Sun, Jan 19, 2025 at 7:50 AM Tomas Vondra wrote:
>
> Hi,
>
> Thanks for the new patches. I've repeated my benchmarking on v8, and I
> agree this looks fine - the speedups are reasonable and match what I'd
> expect on this hardware. I don't see any suspicious results like with
> the earlier patc
On Fri, Jan 17, 2025 at 10:43 PM Masahiko Sawada
wrote:
> On Fri, Jan 17, 2025 at 1:43 AM Dilip Kumar wrote:
> >
> > On Fri, Jan 17, 2025 at 6:37 AM Masahiko Sawada
> wrote:
> >>
> >> On Sun, Jan 12, 2025 at 1:34 AM Masahiko Sawada
> wrote:
> >> >
> >
> >
> > IIRC, one of the blockers
On Fri, Jan 17, 2025 at 1:43 AM Dilip Kumar wrote:
>
> On Fri, Jan 17, 2025 at 6:37 AM Masahiko Sawada wrote:
>>
>> On Sun, Jan 12, 2025 at 1:34 AM Masahiko Sawada
>> wrote:
>> >
>
>
> IIRC, one of the blockers for implementing parallel heap vacuum was
> group locking; have we already
On Fri, Jan 17, 2025 at 6:37 AM Masahiko Sawada
wrote:
> On Sun, Jan 12, 2025 at 1:34 AM Masahiko Sawada
> wrote:
> >
>
IIRC, one of the blockers for implementing parallel heap vacuum was
group locking; have we already resolved that issue, or is it being
addressed in this patch set?
--
R
On 12/19/24 23:05, Masahiko Sawada wrote:
> On Sat, Dec 14, 2024 at 1:24 PM Tomas Vondra wrote:
>>
>> On 12/13/24 00:04, Tomas Vondra wrote:
>>> ...
>>>
>>> The main difference is here:
>>>
>>>
>>> master / no parallel workers:
>>>
>>> pages: 0 removed, 221239 remain, 221239 scanned (100.00%
Dear Sawada-san,
Thanks for updating the patch. ISTM that 0001 and 0002 can be applied
independently.
Therefore, I will first post some comments only for them.
Comments for 0001:
```
+/* New estimated total # of tuples and total # of live tuples */
```
There is an unnecessary blank.
```
+
On Sat, Dec 14, 2024 at 1:24 PM Tomas Vondra wrote:
>
> On 12/13/24 00:04, Tomas Vondra wrote:
> > ...
> >
> > The main difference is here:
> >
> >
> > master / no parallel workers:
> >
> > pages: 0 removed, 221239 remain, 221239 scanned (100.00% of total)
> >
> > 1 parallel worker:
> >
> > pa
On 12/13/24 00:04, Tomas Vondra wrote:
> ...
>
> The main difference is here:
>
>
> master / no parallel workers:
>
> pages: 0 removed, 221239 remain, 221239 scanned (100.00% of total)
>
> 1 parallel worker:
>
> pages: 0 removed, 221239 remain, 10001 scanned (4.52% of total)
>
>
> Clearl
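For context, verbose output of the kind compared above comes from runs along
these lines (the table name and worker count are illustrative; in released
PostgreSQL the PARALLEL option affects only index vacuuming, and the patch
discussed here extends it to the heap pass):
```
-- illustrative run: the "pages: ... scanned (...% of total)" lines in the
-- comparison above come from the VERBOSE output of vacuums like this
vacuum (verbose, parallel 1) test;
```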
On 12/13/24 00:04, Tomas Vondra wrote:
>
> ...
> Attached are results.csv with raw data, and a PDF showing the difference
> between master and the patched build with a varying number of workers. The
> columns on the right show timing relative to master (with no parallel
> workers). Green means "faster"
On 12/9/24 19:47, Tomas Vondra wrote:
> Hi,
>
> Thanks for working on this. I took a quick look at this today, to do
> some basic review. I plan to do a bunch of testing, but that's mostly to
> get a better idea of what kind of improvements to expect - the initial
> results look quite nice and sen
On Mon, Dec 9, 2024 at 2:11 PM Tomas Vondra wrote:
>
> Hi,
>
> Thanks for working on this. I took a quick look at this today, to do
> some basic review. I plan to do a bunch of testing, but that's mostly to
> get a better idea of what kind of improvements to expect - the initial
> results look qui
Dear Tomas,
> 1) I really like the idea of introducing a "compute_workers" callback to
> the heap AM interface. I faced a similar issue with calculating workers
> for index builds, because right now plan_create_index_workers is doing
> that, and the logic works for btree, but not really for brin etc. It didn
Hi,
Thanks for working on this. I took a quick look at this today, to do
some basic review. I plan to do a bunch of testing, but that's mostly to
get a better idea of what kind of improvements to expect - the initial
results look quite nice and sensible.
A couple basic comments:
1) I really like
Hi Sawada-San.
FWIW, here is the remainder of my review comments for the patch v4-0001
==
src/backend/access/heap/vacuumlazy.c
lazy_scan_heap:
2.1.
+ /*
+ * Do the actual work. If parallel heap vacuum is active, we scan and
+ * vacuum heap with parallel workers.
+ */
/with/using/
~~~
2
Hi Sawada-San,
I started to look at patch v4-0001 in this thread.
It is quite a big patch, so this is a WIP; below are just the comments I
have so far.
==
src/backend/access/heap/vacuumlazy.c
1.1.
+/*
+ * Relation statistics collected during heap scanning and need to be
shared amon
Hi Sawada-San,
FYI, the patch 0001 fails to build stand-alone
vacuumlazy.c: In function ‘parallel_heap_vacuum_gather_scan_stats’:
vacuumlazy.c:3739:21: error: ‘LVRelScanStats’ has no member named
‘vacuumed_pages’
vacrel->scan_stats->vacuumed_pages += ss->vacuumed_pages;
^
Dear Sawada-san,
>
> BTW, while updating the patch, I found that we might want to launch
> different numbers of workers for scanning the heap and vacuuming the heap. The
> number of parallel workers is determined based on the number of blocks
> in the table. However, even if this number is high, it could h
On Tue, Nov 12, 2024 at 3:21 AM vignesh C wrote:
>
> On Wed, 30 Oct 2024 at 22:48, Masahiko Sawada wrote:
> >
> >
> > I've attached new version patches that fix failures reported by
> > cfbot. I hope these changes make cfbot happy.
> >
>
> I just started reviewing the patch and found the follow
On Wed, Nov 13, 2024 at 3:10 AM Hayato Kuroda (Fujitsu)
wrote:
>
> Dear Sawada-san,
>
> > TidStoreBeginIterateShared() is designed for multiple parallel workers
> > to iterate a shared TidStore. During an iteration, parallel workers
> > share the iteration state and iterate the underlying radix tr
Dear Sawada-san,
> TidStoreBeginIterateShared() is designed for multiple parallel workers
> to iterate a shared TidStore. During an iteration, parallel workers
> share the iteration state and iterate the underlying radix tree while
> taking appropriate locks. Therefore, it's available only for a s
On Wed, 30 Oct 2024 at 22:48, Masahiko Sawada wrote:
>
>
> I've attached new version patches that fix failures reported by
> cfbot. I hope these changes make cfbot happy.
>
I just started reviewing the patch and found the following comments
while going through it:
1) I felt we should add
On Mon, Nov 11, 2024 at 5:08 AM Hayato Kuroda (Fujitsu)
wrote:
>
> Dear Sawada-san,
>
> >
> > I've attached new version patches that fix failures reported by
> > cfbot. I hope these changes make cfbot happy.
>
> Thanks for updating the patch, and sorry for the delayed reply. I confirmed
> cfbot
Dear Sawada-san,
>
> I've attached new version patches that fix failures reported by
> cfbot. I hope these changes make cfbot happy.
Thanks for updating the patch, and sorry for the delayed reply. I confirmed
that cfbot for Linux/Windows says OK.
I'm still learning the feature, so I can post only on
Sorry for the very late reply.
On Tue, Jul 30, 2024 at 8:54 PM Hayato Kuroda (Fujitsu)
wrote:
>
> Dear Sawada-san,
>
> > Thank you for testing!
>
> I tried to profile the vacuuming with the larger case (40 workers for the 20G
> table), and the attached FlameGraph shows the result. IIUC, I cannot f
On Thu, Jul 25, 2024 at 2:58 AM Hayato Kuroda (Fujitsu)
wrote:
>
> Dear Sawada-san,
>
> > Thank you for the test!
> >
> > I could reproduce this issue and it's a bug; it skipped even
> > non-all-visible pages. I've attached the new version patch.
> >
> > BTW since we compute the number of parallel
Dear Sawada-san,
> Thank you for the test!
>
> I could reproduce this issue and it's a bug; it skipped even
> non-all-visible pages. I've attached the new version patch.
>
> BTW since we compute the number of parallel workers for the heap scan
> based on the table size, it's possible that we lau
On Fri, Jun 28, 2024 at 9:06 PM Amit Kapila wrote:
>
> On Fri, Jun 28, 2024 at 9:44 AM Masahiko Sawada wrote:
> >
> > # Benchmark results
> >
> > * Test-1: parallel heap scan on the table without indexes
> >
> > I created a 20GB table, made garbage on the table, and ran vacuum while
> > changing pa
On Fri, Jul 5, 2024 at 6:51 PM Hayato Kuroda (Fujitsu)
wrote:
>
> Dear Sawada-san,
>
> > The parallel vacuum we have today supports only index vacuuming.
> > Therefore, while multiple workers can work on different indexes in
> > parallel, the heap table is always processed by a single process
Dear Sawada-san,
> The parallel vacuum we have today supports only index vacuuming.
> Therefore, while multiple workers can work on different indexes in
> parallel, the heap table is always processed by a single process.
> I'd like to propose $subject, which enables us to have multiple
> wor
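For context, this is the behaviour the proposal starts from: with today's
PARALLEL option only the index-vacuuming phase uses workers, while the heap
scan and heap vacuum phases stay in the leader process (table name and worker
count below are illustrative):
```
-- current behaviour: up to 4 workers vacuum the indexes in parallel,
-- but the heap itself is still scanned and vacuumed by the leader alone
vacuum (parallel 4, verbose) test;
```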
On Fri, Jun 28, 2024 at 9:44 AM Masahiko Sawada wrote:
>
> # Benchmark results
>
> * Test-1: parallel heap scan on the table without indexes
>
> I created a 20GB table, made garbage on the table, and ran vacuum while
> changing the parallel degree:
>
> create unlogged table test (a int) with (autovacuum
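A minimal sketch of the benchmark shape described above; the exact table
definition, data volume, and storage options are cut off in the preview, so
the row count, delete fraction, and autovacuum setting here are assumptions:
```
-- hypothetical reconstruction of the described setup: build a large
-- unlogged table, delete rows to create garbage, then vacuum it while
-- varying the parallel degree
create unlogged table test (a int) with (autovacuum_enabled = off);
insert into test select generate_series(1, 600000000); -- roughly 20GB (assumed size)
delete from test where a % 10 = 0;                      -- create garbage on ~10% of rows
vacuum (parallel 2, verbose) test;
```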