I am not sure I understand this parameter well enough, but its default value 
is currently 1000. I have read Robert’s post 
(http://rhaas.blogspot.com/2018/06/using-forceparallelmode-correctly.html) and 
could play with those parameters, but I am unsure whether what you are 
describing will unlock this 2GB limit.
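For reference, the session-level settings involved look roughly like this. This is only a sketch: the parameter names are standard PostgreSQL GUCs, but the values are illustrative, not a recommendation for this workload.

```sql
-- Encourage the planner to choose parallel plans (illustrative values).
SET parallel_setup_cost = 0;             -- default 1000: cost charged for launching workers
SET parallel_tuple_cost = 0;             -- default 0.01: cost per tuple sent to the leader
SET max_parallel_workers_per_gather = 4; -- cap on workers per Gather node

-- Per Robert's post, force_parallel_mode is a testing/debugging aid only:
-- it wraps the plan in a Gather but does not make more of the plan parallel.
SET force_parallel_mode = on;
```

Lowering parallel_setup_cost and parallel_tuple_cost makes parallel plans cheaper in the planner's eyes, which is the supported way to coax a parallel aggregate, as opposed to force_parallel_mode, which Robert's post warns is not meant for that.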


From: Vijaykumar Jain <vijaykumarjain.git...@gmail.com>
Sent: Thursday, July 22, 2021 16:32
To: l...@laurent-hasson.com
Cc: Justin Pryzby <pry...@telsasoft.com>; pgsql-performa...@postgresql.org
Subject: Re: Big performance slowdown from 11.2 to 13.3

Just asking; I may be completely wrong.

Is this query parallel safe?
Can we force parallel workers, by setting a low parallel_setup_cost or 
otherwise, to make use of Gather and Partial HashAggregate(s)?
I am just assuming that more workers doing things in parallel would require 
less disk spill per hash aggregate (or partial hash aggregate?), with a 
Gather at the end.
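When the planner does pick a parallel aggregate, the plan typically has roughly this shape (an illustrative sketch, not output from the actual query; table name is made up):

```
Finalize HashAggregate
  ->  Gather
        Workers Planned: 4
        ->  Partial HashAggregate
              ->  Parallel Seq Scan on big_table
```

Each worker builds its own Partial HashAggregate over a fraction of the rows, so any spill is divided across workers, and the leader only has to combine the partial results in the Finalize step.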

I did some runs in my demo environment (not with the same query) on some 
GROUP BY aggregates over around 25M rows, and they showed reasonable 
results, not too far off. This was PG 14 on Ubuntu.
