> So I basically spent most of the time trying to create a reproducible case
> and I can say I failed. I can however reproduce this with a specific large
> data set with a specific data distribution, but not an artificial one.
The query plans posted with the statistics prefer a Bitmap Index Scan.
On 07/26/2018 07:27 AM, Tomas Vondra wrote:
Arcadiy, can you provide plans with parallel query disabled? Or even
better, produce a test case that reproduces this (using synthetic
data, anonymized data or something like that, if needed).
So I basically spent most of the time trying to create a reproducible case
and I can say I failed. I can however reproduce this with a specific large
data set with a specific data distribution, but not an artificial one.
On 07/26/2018 10:11 AM, Emre Hasegeli wrote:
Isn't the 23040 just the totalpages * 10 per `return totalpages * 10;`
in bringetbitmap()?
Yes, it is just confusing. The correct value is one level up in the
tree. It is 204 + 4404 rows removed by index recheck = 4608, so the
estimate is not only 150x but 733x off :(.
The sequential sca
On 26 July 2018 at 04:50, Tomas Vondra wrote:
> My guess is this is the root cause - the estimated number of rows is much
> higher than in practice (3377106 vs. 23040), so at the end the seqscan is
> considered to be slightly cheaper and wins. But the actual row count is
> ~150x lower, making the
Hi,
On 07/25/2018 03:58 PM, Arcadiy Ivanov wrote:
  ->  Bitmap Index Scan on tradedate_idx
      (cost=0.00..231.96 rows=3377106 width=0)
      (actual time=4.500..4.500 rows=23040 loops=1)
        Index Cond: (((data_table.data ->>
        'tradeDate'::text))::numeric >=