AI Rumman wrote:
> Why is the following query getting wrong estimation of rows?
> I am using PostgreSQL 9.2.1 with default_statistics_target = 100.
> I execute vacuum analyze each night.
> Hash Join (cost=14067.90..28066.53 rows=90379 width=26) (actual
> time=536.009..1772.910 rows=337139 loops=…
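A common first step when a join estimate is off by this much is to raise the statistics target on the join column and re-analyze. The table and column names below are only placeholders, since the actual query isn't shown:

-- Sketch with placeholder names: raise the per-column sample size above
-- the default_statistics_target of 100, then refresh the statistics.
ALTER TABLE some_table ALTER COLUMN join_column SET STATISTICS 1000;
ANALYZE some_table;

If the estimate improves after that, the misestimate was a sampling problem; if not, it is more likely a correlation the planner cannot see.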
Hello Dieter,
If you are asking for more than about 20% of the rows, the optimizer will
choose to do a seq scan, and that is actually the right thing to do. In your
second example there were fewer rows, which is why it chose the index.
You can force an index scan by changing the optimizer settings.
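For example, presumably via the enable_seqscan planner setting (an assumption about what was meant here), you could test whether the index plan is actually faster:

-- Testing only, per session: this discourages (but does not strictly
-- forbid) sequential scans so you can compare the index plan's timing.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT ...;  -- rerun the query in question here
RESET enable_seqscan;        -- restore the default afterwards

If the forced index plan is slower, the planner was right all along.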
Hi Tom,
thanks for your reply. It was the sequential scan on table user (about 1
million rows) which really surprised me. But a sequential scan over 1 million
users seems to be more efficient than an index scan for 41,000 rows.
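A rough sketch of why: with the default cost settings, fetching 41,000 rows through an index can mean tens of thousands of random heap page reads at random_page_cost = 4.0 each, while a sequential scan pays only seq_page_cost = 1.0 per page read in order, so scanning the whole table can come out cheaper. The settings are visible directly:

SHOW seq_page_cost;     -- 1.0 by default
SHOW random_page_cost;  -- 4.0 by default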
If I execute the query with the ID of a competition with fewer participants …
Hi Igor,
thanks for the reply. The sequential scan on user_2_competition wasn't my
main problem. What really surprised me was the sequential scan on table user,
a scan over one million rows.
Hash Left Join (cost=111357.64..126222.29 rows=41396 width=42) (actual
time=1982.5…