> On 21. Aug 2019, at 20:26, Jeff Janes wrote:
>
> As noted elsewhere, v12 thwarts your attempts to deliberately design the bad
> estimates. You can still get them, you just have to work a bit harder at it:
>
> CREATE FUNCTION j (bigint, bigint) returns setof bigint as $$ select generate
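[Jeff's function definition is cut off above; the following is my own sketch of the technique he appears to describe, with the ROWS value and names being my assumptions: wrapping generate_series in a plain SQL function hides it from v12's support-function estimation, so the planner falls back to the declared ROWS figure.]

```sql
-- My sketch, not Jeff's exact definition (his is truncated above).
-- The wrapper hides generate_series from v12's support functions,
-- so the planner trusts the declared ROWS estimate instead.
CREATE FUNCTION j(bigint, bigint) RETURNS SETOF bigint AS $$
    SELECT generate_series($1, $2)
$$ LANGUAGE sql ROWS 100000000;

-- Estimated at 100M rows even though it returns exactly one:
EXPLAIN SELECT DISTINCT n FROM j(1, 1) AS t(n);
```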
Hi,
> On 20. Aug 2019, at 19:32, Andres Freund wrote:
>
> Hi,
>
> On 2019-08-20 17:11:58 +0200, Felix Geisendörfer wrote:
>>
>> HashAggregate (cost=80020.01..100020.01 rows=200 width=8) (actual time=19.349..23.123 rows=1 loops=1)
>
> FWIW, t
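[For context on the plan quoted above, a sketch from my side rather than the thread's original query: before v12's planner support functions, a set-returning function such as generate_series carried a flat row estimate (1000 rows, from pg_proc.prorows), so a cross join of two calls is costed at 1,000,000 rows even when each call returns a single row — the same estimated-vs-actual gap as in the quoted HashAggregate line.]

```sql
-- Sketch, assuming a pre-v12 planner: each generate_series is estimated
-- at 1000 rows, the cross join at 1,000,000, yet only one row is produced.
EXPLAIN ANALYZE
SELECT DISTINCT a.n, b.n
FROM generate_series(1, 1) a(n),
     generate_series(1, 1) b(n);
```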
Hi all,
today I debugged a query that was executing about 100x slower than expected,
and was very surprised by what I found.
I'm posting to this list to see if this might be an issue that should be fixed
in PostgreSQL itself.
Below is a simplified version of the query in question:
SET work_me
https://github.com/felixge/pg-slow-gin/blob/master/pg-slow-gin.ipynb
Please help me understand what causes the O(N^2) performance of query 1, and
whether query 2 is the best way to work around this issue.
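[Since query 1 and query 2 only exist in the linked notebook, here is a generic, hedged way to check the O(N^2) suspicion — the query below is a placeholder, not the notebook's: time the query at doubling input sizes and see whether the runtime roughly quadruples.]

```sql
-- Placeholder scaling check (generate_series stands in for the real query 1).
-- Run once per size: if 2x rows costs ~4x time, growth is quadratic.
EXPLAIN (ANALYZE, TIMING ON)
SELECT count(*)
FROM generate_series(1, 10000) g(i);   -- repeat with 20000, 40000, 80000
```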
Thanks,
Felix Geisendörfer