Hi

On Tue, Feb 23, 2021 at 10:36 Alexander Voytsekhovskyy <
young.in...@gmail.com> wrote:

> Thanks for the suggestion.
>
> With temp tables there is another issue: there are already 5-6 tables and
> 2-3 IN filters. If I replace them with temp tables, the query may hit the
> planner's limits and it will start producing terrible query plans, for
> example when the genetic query optimizer kicks in.
>

You can increase these limits (from_collapse_limit, join_collapse_limit,
and geqo_threshold); the defaults are relatively low and can safely be
raised on modern CPUs.

https://www.postgresql.org/docs/current/runtime-config-query.html
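For example, if the rewritten query touches around 8-10 relations (counting any
temp tables), raising the collapse limits above that count keeps the standard
planner in charge of join ordering, and raising geqo_threshold delays the
switch to the genetic optimizer. The value 20 below is just an illustrative
choice, not a recommendation:

```sql
-- PostgreSQL 13 defaults: from_collapse_limit = 8,
-- join_collapse_limit = 8, geqo_threshold = 12.
SET from_collapse_limit = 20;  -- flatten more subqueries into the parent query
SET join_collapse_limit = 20;  -- let the planner reorder more explicit JOINs
SET geqo_threshold = 20;       -- use exhaustive search up to this many items
```

These can be set per session as above, per role with ALTER ROLE ... SET, or
globally in postgresql.conf.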

Regards

Pavel


> On Tue, Feb 23, 2021 at 1:45 AM Ron <ronljohnso...@gmail.com> wrote:
>
>> On 2/22/21 9:00 AM, Oleksandr Voytsekhovskyy wrote:
>>
>> Greetings,
>>
>> We have queries with IN filters containing long lists of INT values.
>>
>> Sometimes they run extremely slowly, and I have found a suggestion to use
>> the syntax
>>
>> Field IN (VALUES(1465), (1478), ...
>>
>> Instead of
>>
>> Field IN (1465, 1478, ...
>>
>> In some cases it helps, but in others it makes the query run 1000+ times
>> slower.
>>
>>
>> Insert the values into a TEMPORARY TABLE, then join that to your main
>> table?
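>>
>> A minimal sketch of that approach (the table and column names here are
>> hypothetical):

```sql
-- Load the IDs once into a temp table and give the planner statistics.
CREATE TEMPORARY TABLE tmp_ids (id int PRIMARY KEY);
INSERT INTO tmp_ids VALUES (1465), (1478);  -- ... remaining values

-- Accurate row estimates help the planner choose a sensible join.
ANALYZE tmp_ids;

SELECT t.*
FROM some_table AS t
JOIN tmp_ids USING (id);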
>>
>>
>> So the issue is:
>> - some queries are much faster with a simple IN
>> - some queries are much faster with IN + VALUES
>>
>> Here is the EXPLAIN output for a case where it’s extremely slow:
>>
>> ->  HashAggregate  (cost=5.78..9.62 rows=385 width=4)
>>       Group Key: "*VALUES*".column1
>>       ->  Values Scan on "*VALUES*"  (cost=0.00..4.81 rows=385 width=4)
>>
>> What is the right way to pass a long list of INT values to an IN filter?
>>
>> I am using PostgreSQL 13.1 on Ubuntu
>>
>>
>> --
>> Angular momentum makes the world go 'round.
>>
>
