Dear Sawada-san,

Thanks for updating the patch. I have reviewed it, and below are my comments so
far.

01.
Sorry if I am forgetting something, but is there a reason that
parallel_vacuum_compute_workers() is mandatory? At first glance, it seems the
wrapper function could check whether the API is NULL and regard that case as
zero workers.
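To illustrate the idea, here is a minimal standalone sketch (the struct and
function names are illustrative, not the patch's actual definitions) of a
wrapper that treats a NULL callback as "no parallel vacuum support":

```c
#include <stddef.h>

/* Hypothetical slice of a table AM routine table; names are illustrative. */
typedef struct TableAmRoutine
{
	int			(*parallel_vacuum_compute_workers) (int nrequested);
} TableAmRoutine;

/*
 * Wrapper: if the AM does not provide the callback, regard that as
 * "no parallel vacuum support" and return zero workers instead of
 * requiring every AM to implement the API.
 */
static int
table_parallel_vacuum_compute_workers(const TableAmRoutine *am, int nrequested)
{
	if (am->parallel_vacuum_compute_workers == NULL)
		return 0;
	return am->parallel_vacuum_compute_workers(nrequested);
}
```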

02. table_parallel_vacuum_estimate
We can assert that state is not NULL.

03. table_parallel_vacuum_initialize

Same as 02. Also, I think we should ensure here that
table_parallel_vacuum_estimate() has already been called, and the current
assertion might not be enough because callers could set arbitrary values here.
Other functions like table_rescan_tidrange() check an internal flag of the data
structure, which seems good to me. How do you feel about checking that
pcxt->estimator has a non-zero value?
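As a standalone sketch of what I mean (the struct here is a simplified
stand-in, not the real ParallelContext), the initialize step could assert that
the estimate step has already accumulated a non-zero size:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for ParallelContext; only the field discussed here. */
typedef struct ParallelContext
{
	size_t		estimator;		/* shared-memory size estimated so far */
} ParallelContext;

/*
 * Illustrative check: initialize must follow estimate, so the accumulated
 * estimate should already be non-zero by the time we get here.
 */
static void
table_parallel_vacuum_initialize(ParallelContext *pcxt)
{
	assert(pcxt != NULL);
	assert(pcxt->estimator != 0);	/* estimate step must have run */
	/* ... actual initialization would follow ... */
}
```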

04. table_parallel_vacuum_initialize_worker
Same as 02. Also, can we ensure that table_parallel_vacuum_initialize() has
been done?

05. table_parallel_vacuum_collect_dead_items
Same as 02. Also, can we ensure that table_parallel_vacuum_initialize_worker()
has been done?

06. table_parallel_vacuum_initialize_worker
The comments atop the function need to be updated.

07.
While playing with the patch, I found that more parallel vacuum workers can be
launched than the table has pages:

```
postgres=# CREATE TABLE var (id int);
CREATE TABLE
postgres=# INSERT INTO var VALUES (generate_series(1, 10000));
INSERT 0 10000
postgres=# VACUUM (parallel 80, verbose) var ;
INFO:  vacuuming "postgres.public.var"
INFO:  launched 80 parallel vacuum workers for collecting dead tuples (planned: 80)
INFO:  finished vacuuming "postgres.public.var": index scans: 0
pages: 0 removed, 45 remain, 45 scanned (100.00% of total), 0 eagerly scanned
...
```

I believe the minimum chunk size is one page, so this situation could hurt
performance. How do you feel about capping the value with
rel->rd_rel->relpages in heap_parallel_vacuum_compute_workers()?
This value is not always up-to-date, but it seems a good candidate.
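Concretely, something like the following standalone sketch (not the patch's
actual code; the signature is simplified to take relpages directly):

```c
/* Same shape as the Min() macro in PostgreSQL's c.h */
#define Min(x, y)	((x) < (y) ? (x) : (y))

/*
 * Illustrative sketch: cap the number of parallel vacuum workers at the
 * table's page count, since each worker needs at least one page (the
 * minimum chunk) to process. relpages may be stale, but it is a
 * reasonable upper bound.
 */
static int
heap_parallel_vacuum_compute_workers(int nrequested, int relpages)
{
	return Min(nrequested, relpages);
}
```

With this, the example above would launch at most 45 workers for the 45-page
table instead of 80.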

Best regards,
Hayato Kuroda
FUJITSU LIMITED 
