Dear Sawada-san,

> Thank you for the test!
> 
> I could reproduce this issue and it's a bug; it skipped even
> non-all-visible pages. I've attached the new version patch.
> 
> BTW since we compute the number of parallel workers for the heap scan
> based on the table size, it's possible that we launch multiple workers
> even if most blocks are all-visible. It seems to be better if we
> calculate it based on (relpages - relallvisible).

Thanks for updating the patch. I applied it and confirmed that all pages are scanned.
I used almost the same script (I only changed max_parallel_maintenance_workers)
and got the results below. I think the tendency is the same as yours.

```
parallel 0: 61114.369 ms
parallel 1: 34870.316 ms
parallel 2: 23598.115 ms
parallel 3: 17732.363 ms
parallel 4: 15203.271 ms
parallel 5: 13836.025 ms
```
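
On the last point in your mail: if I read the suggestion correctly, the worker
computation would change along the lines below. This is only a minimal sketch to
confirm my understanding; compute_vacuum_workers and the 1GB-per-worker scaling
are my own placeholders, not code from the patch.

```c
#include "postgres.h"
#include "utils/rel.h"

/*
 * Sketch only: derive the parallel vacuum worker count from the number
 * of pages that are not all-visible, instead of the full table size.
 * The function name and the scaling factor are placeholders.
 */
static int
compute_vacuum_workers(Relation rel, int max_workers)
{
	BlockNumber relpages = (BlockNumber) rel->rd_rel->relpages;
	BlockNumber relallvisible = (BlockNumber) rel->rd_rel->relallvisible;
	BlockNumber pages_to_scan;
	int			workers;

	/* Only the not-all-visible pages actually need to be scanned. */
	pages_to_scan = (relpages > relallvisible) ? relpages - relallvisible : 0;

	/* Placeholder scaling: one worker per ~1GB worth of pages to scan. */
	workers = (int) (pages_to_scan / ((1024 * 1024 * 1024) / BLCKSZ));

	return Min(workers, max_workers);
}
```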

I have started reading your code, but it is taking some time because I have not
worked in this area before. My initial comments are below.

1.
This patch cannot be built when debug mode is enabled. See [1].
IIUC, this is because NewRelminMxid was moved from struct LVRelState to
PHVShared, so the reference should be updated to something like
"vacrel->counters->NewRelminMxid".
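
For example, the failing Assert from the log below would become something like
this (assuming the field now lives in the shared counters struct):

```c
/* assumption: NewRelminMxid now lives in the shared counters struct */
Assert(MultiXactIdIsValid(vacrel->counters->NewRelminMxid));
```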

2.
I compared this with the parallel heap scan and found that it does not have a
compute_worker API. Can you clarify the reason for this inconsistency?
(I suspect it is intentional, because the calculation logic seems to depend on
the heap structure; if so, should we add such an API for table scans as well?)
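
Just to illustrate what I mean (purely hypothetical; no such callback exists
today, and the name is made up):

```c
/*
 * Hypothetical sketch: if table scans had a compute-workers API analogous
 * to parallel vacuum's, it might be a per-AM callback like this, letting
 * each AM use its own storage knowledge (e.g. heap could discount
 * all-visible pages as discussed above).
 */
typedef int (*scan_compute_workers_function) (Relation rel,
											  int requested_workers);
```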

[1]:
```
vacuumlazy.c: In function ‘lazy_scan_prune’:
vacuumlazy.c:1666:34: error: ‘LVRelState’ {aka ‘struct LVRelState’} has no member named ‘NewRelminMxid’
  Assert(MultiXactIdIsValid(vacrel->NewRelminMxid));
                                  ^~
....
```

Best regards,
Hayato Kuroda
FUJITSU LIMITED
