Greg Hennessy <greg.henne...@gmail.com> writes:

Hi,

>> On Thu, 17 Jul 2025 at 12:44, Greg Hennessy <greg.henne...@gmail.com> wrote:
>>> workers, but there isn't an easy way to get more
>>> workers.
> On 7/16/25 11:01 PM, David Rowley wrote:
>>> Is "alter table ... set (parallel_workers=N);" not easy enough?
>
> It may be easy enough for one table, but that won't work for joins as
> far as I can tell. I'd like to have more cpu's available in more cases.

It is supposed to work for join cases. See:

set max_parallel_workers = 8;
set max_parallel_workers_per_gather = 4;

create table bigt(a int, b int, c int);
insert into bigt select i, i, i from generate_series(1, 1000000)i;
analyze bigt;

explain (costs off) select * from bigt t1 join bigt t2 using(b) where t1.a < 10;
                   QUERY PLAN                   
------------------------------------------------
 Gather
   Workers Planned: 2
   ->  Parallel Hash Join
         Hash Cond: (t2.b = t1.b)
         ->  Parallel Seq Scan on bigt t2
         ->  Parallel Hash
               ->  Parallel Seq Scan on bigt t1
                     Filter: (a < 10)
(8 rows)

alter table bigt set (parallel_workers=4);

explain (costs off) select * from bigt t1 join bigt t2 using(b) where t1.a < 10;
                   QUERY PLAN                   
------------------------------------------------
 Gather
   Workers Planned: 4
   ->  Parallel Hash Join
         Hash Cond: (t2.b = t1.b)
         ->  Parallel Seq Scan on bigt t2
         ->  Parallel Hash
               ->  Parallel Seq Scan on bigt t1
                     Filter: (a < 10)
(8 rows)
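
Note that "Workers Planned" is only the plan-time number; at execution
the server may launch fewer workers if the max_parallel_workers pool is
already in use. EXPLAIN (ANALYZE) reports both, e.g.:

explain (analyze, costs off)
select * from bigt t1 join bigt t2 using(b) where t1.a < 10;
-- look for "Workers Launched: N" under the Gather node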

However, as the query becomes more complex, some constructs can prevent
parallelism from being used at all.

e.g.  SELECT parallel_unsafe_udf(a) FROM t;
or a correlated subquery in your queries, as in [1].
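
As a minimal sketch of the UDF case (parallel_unsafe_udf is a made-up
name here): a function created without an explicit PARALLEL label
defaults to PARALLEL UNSAFE, which forces a serial plan wherever it
appears:

create function parallel_unsafe_udf(x int) returns int
language sql as 'select x + 1';

explain (costs off) select parallel_unsafe_udf(a) from bigt;
-- no Gather node appears; the query runs serially

alter function parallel_unsafe_udf(int) parallel safe;
-- after re-labelling, the same query can get a parallel plan again

Correlated subqueries typically end up as SubPlan nodes, which are at
best parallel restricted, so they can likewise keep the scans below a
Gather from being parallelized.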


[1] https://www.postgresql.org/message-id/871pqzm5wj.fsf%40163.com 

-- 
Best Regards
Andy Fan


