Florian Weimer <[EMAIL PROTECTED]> writes:
> I've done that. Fortunately, ANALYZE time didn't increase by that
> much, compared to the default (by just a factor of 10).
With really high stats targets you also have to keep an eye on planning time.
The extra data in the stats table can cause planning to take longer.
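A rough way to keep an eye on that planning overhead is psql's \timing together
with a bare EXPLAIN, which plans the query but does not execute it; a minimal
sketch, reusing the query from the original post:

\timing
EXPLAIN SELECT * FROM large_rel n, smaller_rel a
  WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';

Most of the reported time for a bare EXPLAIN is parse and plan time, so running
it before and after raising the statistics target gives an idea of what the
larger stats arrays cost.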
* Tom Lane:
> Yeah, n_distinct estimation from a sample is inherently hard :-(. Given
> that you have such a long tail on the distribution, it might be worth
> your while to crank the stats target for that column all the way to the
> maximum (1000).
I've done that. Fortunately, ANALYZE time didn't increase by that much,
compared to the default (by just a factor of 10).
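For reference, the per-column knob being suggested here (as opposed to the
global default_statistics_target) would look roughly like this; the table and
column names are taken from the plan quoted further down in the thread:

ALTER TABLE large_rel ALTER COLUMN field_1 SET STATISTICS 1000;
ANALYZE large_rel;   -- the new target only takes effect at the next ANALYZE

1000 is the upper limit for the statistics target in 8.1.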
Florian Weimer <[EMAIL PROTECTED]> writes:
>> Maybe larger stats targets would help.
> I've set default_statistics_target to 100 and rerun ANALYZE on that
> table. The estimate went down to 43108 (and the hash join is still
> the preferred plan). ANALYZE with default_statistics_target = 200
> (wh
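The same experiment can be run per session, without editing postgresql.conf;
a sketch:

SET default_statistics_target = 200;  -- affects only this session's ANALYZE
ANALYZE large_rel;
ANALYZE smaller_rel;
EXPLAIN SELECT * FROM large_rel n, smaller_rel a
  WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';

The SET is local to the session, but the statistics written by ANALYZE are
stored in the catalogs, so the new estimates are visible to all backends.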
* Tom Lane:
> Florian Weimer <[EMAIL PROTECTED]> writes:
>> -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09
>> rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)
>>       Index Cond: (n.field_1 = "outer".field_2)
>
> What you need to look into is why that rowcount estimate is so far off.
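One quick check along those lines is whether the estimate is already wrong for
a plain constant comparison, independent of the join; a sketch, where the
literal is just a placeholder for some value that really occurs in field_1:

EXPLAIN ANALYZE
SELECT count(*) FROM large_rel WHERE field_1 = 42;  -- 42 = placeholder value

If the estimated row count for that scan is also in the hundreds of thousands
while the actual count is tiny, the per-column statistics on large_rel.field_1
are the problem rather than the join selectivity estimate.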
Florian Weimer <[EMAIL PROTECTED]> writes:
> -> Bitmap Index Scan on large_rel_1_field_1 (cost=0.00..2003.09
> rows=193739 width=0) (actual time=0.148..0.148 rows=12 loops=1)
>       Index Cond: (n.field_1 = "outer".field_2)
What you need to look into is why that rowcount estimate is so far off.
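The numbers the planner bases that estimate on can be inspected directly in
the pg_stats view; a minimal sketch, with table and column names taken from
the plan above:

SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
 WHERE tablename = 'large_rel' AND attname = 'field_1';

A badly underestimated n_distinct here is the usual way a per-value row count
ends up this far too high.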
Hi,
for this simple join of two tables,
SELECT * FROM large_rel n, smaller_rel a
WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';
PostgreSQL 8.1.4 chooses an extremely bad query plan:
Hash Join (cost=283.45..8269374.38 rows=14137 width=94)
  Hash Cond: ("outer".field_1 = "inner".field_2)
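To see what plan the planner would otherwise choose, and how its cost estimate
compares with the hash join's, the hash join can be disabled for one session;
a sketch:

SET enable_hashjoin = off;        -- disable hash joins for this session only
EXPLAIN ANALYZE
SELECT * FROM large_rel n, smaller_rel a
 WHERE n.field_1 = a.field_2 AND a.key = '127.0.0.1';
RESET enable_hashjoin;

Comparing the estimated costs of the two plans against their actual runtimes
usually shows which estimate is leading the planner astray.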