Ok, I set all those cost parameters:
# - Planner Cost Constants -

seq_page_cost = 0.0001                  # measured on an arbitrary scale
random_page_cost = 0.0002
cpu_tuple_cost = 0.00001                # same scale as above
cpu_index_tuple_cost = 0.000005         # same scale as above
cpu_operator_cost = 0.0000025           # same scale as above
parallel_tuple_cost = 0.0001            # same scale as above
parallel_setup_cost = 1.0               # same scale as above
#min_parallel_table_scan_size = 8MB
#min_parallel_index_scan_size = 512kB
effective_cache_size = 2GB

It still has the nested loop on top, but the total cost is now:
GroupAggregate  (cost=20000005652.88..20000005652.90 rows=370 width=68)
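(For completeness, a quick way to confirm which values the session is
actually planning with — just a sketch, nothing here is specific to our
query:

    SET enable_nestloop = false;
    SELECT name, setting
      FROM pg_settings
     WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
                    'cpu_index_tuple_cost', 'cpu_operator_cost',
                    'parallel_tuple_cost', 'parallel_setup_cost',
                    'effective_cache_size', 'enable_nestloop');

and then re-run EXPLAIN on the query.)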


On Tue, Nov 17, 2020 at 5:08 PM Frits Jalvingh <j...@etc.to> wrote:

> Ah, sorry, I forgot. I set "hash_mem_multiplier = 2", and after that to
> 20. It had no effect on the nested loops.
>
> On Tue, Nov 17, 2020 at 4:58 PM Frits Jalvingh <j...@etc.to> wrote:
>
>> Hi Justin, thanks for your help!
>> I have attached both plans, both made with set enable_nestloop = false.
>> On the PostgreSQL 13 server work_mem is 64MB. It cannot really be higher
>> there: PostgreSQL does not enforce a hard cap on its total memory use
>> (work_mem applies per sort/hash operation), so setting it higher on this
>> VM makes the OOM killer kill PostgreSQL for some queries.
>> On the Postgres 9.6 server we have it way higher, at 5GB (this machine is
>> a monster with about 800GB of RAM).
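>> (In case it helps to experiment: SET LOCAL limits a higher setting to a
>> single transaction instead of the whole server — sketch only, the SELECT
>> is a placeholder for the problem query:
>>
>>     BEGIN;
>>     SET LOCAL work_mem = '256MB';
>>     SET LOCAL hash_mem_multiplier = 4;
>>     EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;  -- the problem query
>>     ROLLBACK;
>> )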
>>
>> I did indeed see that the artificial cost for the nested loops adds up to
>> 2x that amount. But that seems to be because there are actually two nested
>> loops in there: we use a cross join with a "time" table (which contains
>> just some 28 rows), and that one always seems to need a nested loop (it is
>> always present). So I'm not too certain that the 2x disable_cost is from
>> the joins; it seems to come from there simply being two nested loops. And
>> I actually wondered whether that could be a cause of the issue, because as
>> far as costs are concerned that second nested loop only _doubles_ the
>> cost...
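>> (For reference — assuming disable_cost is still the 1.0e10 from the
>> source — the totals do line up with exactly two disabled nodes:
>>     2 * 1.0e10               = 20,000,000,000
>>     20,008,853,763.07 - 2e10 =      8,853,763.07
>>     20,000,005,652.88 - 2e10 =          5,652.88
>> i.e. the "real" estimated cost is whatever remains after subtracting the
>> two penalties.)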
>>
>> Regards,
>>
>> Frits
>>
>>
>> On Tue, Nov 17, 2020 at 3:20 PM Justin Pryzby <pry...@telsasoft.com>
>> wrote:
>>
>>> On Tue, Nov 17, 2020 at 02:47:55PM +0100, Frits Jalvingh wrote:
>>> > But lately while migrating to Postgres 13 (from 9.6) we found that
>>> > Postgres does not (always) obey the enable_nestloop = false setting
>>> > anymore: some
>>> >
>>> > The execution plan on Postgres 13.1:
>>>
>>> Could you send the plans under pg13 and pg9.6 as attachments ?
>>>
>>> What is the setting of work_mem ?
>>>
>>> I see the cost is dominated by 2*disable_cost, but I wonder whether the
>>> I/O cost of hash joins now exceeds that.  Maybe hash_mem_multiplier helps
>>> you?
>>>
>>> GroupAggregate  (cost=20008853763.07..20008853776.02 rows=370 width=68)
>>>   Group Key: (COALESCE(adres_pe.id_s, '-1'::integer)),
>>>
>>> --
>>> Justin
