Tim Uckun writes:
>> As I've stated repeatedly, your next move needs to be to increase the
>> stats target, at least for that column if not globally. You probably
>> don't need to have it know about every last domain id, but you need to
>> have it know about enough that it realizes that domains not included in
>> the stats are low-frequency.
> Ok. How do I go about doing this?

If you want to do it globally for the whole database: change
default_statistics_target in postgresql.conf. If you only want to do it
for that one column, use ALTER TABLE ... ALTER COLUMN ... SET STATISTICS.
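Concretely, the two options might look like this (the target value of 1000
is only an illustration; pick something appropriate for the data):

ALTER TABLE topical_urls ALTER COLUMN domain_id SET STATISTICS 1000;
ANALYZE topical_urls;

or, globally, set default_statistics_target = 1000 in postgresql.conf,
reload, and re-ANALYZE the affected tables.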
Tim Uckun writes:
>> With a table that large, you're probably going to need a larger stats
>> target in order to get reasonable estimates for low-frequency values.
>> Am I right in guessing that pg_stats.n_distinct is much too low for
>> the domain_id column?
> The domain_id is in the topical urls. A select count of domains shows
> that there are 700 domains; pg_stats shows 170, which seems kind of low
> but maybe is not out of bounds.
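For what it's worth, the two numbers can be compared directly (table and
column names as used in this thread):

SELECT n_distinct FROM pg_stats
WHERE tablename = 'topical_urls' AND attname = 'domain_id';

SELECT count(DISTINCT domain_id) FROM topical_urls;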
On Mon, Jan 17, 2011 at 8:23 AM, Tim Uckun wrote:
>> Hmm. What do you get for:
>>
>> SELECT relname, pg_relation_size(oid), reltuples, relpages FROM
>> pg_class WHERE relname IN ('consolidated_urls',
>> 'consolidated_urls_pkey');
>
>         relname         | pg_relation_size |  reltuples  | relpages
> ------------------------+------------------+-------------+----------
>  consolidated_urls      |       1303060480 | 1.80192e+06 |   159065
>  consolidated_urls_pkey |        114745344 | 1.80192e+06 |    14007
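Those byte counts are easier to read via pg_size_pretty, e.g.:

SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS size,
       reltuples, relpages
FROM pg_class
WHERE relname IN ('consolidated_urls', 'consolidated_urls_pkey');

(1303060480 bytes is about 1.2 GB for the table, 114745344 bytes about
109 MB for the index.)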
On Sun, Jan 16, 2011 at 5:47 PM, Tim Uckun wrote:
> Hmm, autovacuum *should* have been keeping track of things for you,
> but it might still be worth doing a manual ANALYZE against that table
> to see if the estimated rowcount changes. If not, you'll need to raise
> the statistics target for that column (and again ANALYZE).

I started a manual ANALYZE; the analyze finished.
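If it helps, ANALYZE VERBOSE reports what it sampled, and the refreshed
estimate can then be re-read from pg_class:

ANALYZE VERBOSE consolidated_urls;
SELECT reltuples, relpages FROM pg_class WHERE relname = 'consolidated_urls';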
Tim Uckun writes:
>> Possibly the table's never been ANALYZEd ... do you have autovacuum
>> enabled? If it has been analyzed reasonably recently, then it might be
>> necessary to crank up the statistics target to get a better estimate.
>> It's difficult to give detailed advice when you haven't mentioned what
>> PG version you are running.
> I do have autovacuum enabled and I am running 8.4.

Hmm, autovacuum *should* have been keeping track of things for you,
but it might still be worth doing a manual ANALYZE against that table
to see if the estimated rowcount changes. If not, you'll need to raise
the statistics target for that column (and again ANALYZE).
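Whether autovacuum has actually analyzed the table recently can be
checked like so:

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'consolidated_urls';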
Both queries use the same row estimate and the costs are comparable, but
the execution times differ hugely: 0.044s versus 3100s. I think the cost
of the backward index scan is estimated too low.

On 1/15/11, Tom Lane wrote:
> Tim Uckun writes:
>> I reported this in the pgsql-general list and was instructed to send
>> the analyze outputs here.
Tim Uckun writes:
> I reported this in the pgsql-general list and was instructed to send
> the analyze outputs here.

This isn't a bug, it's just a poor choice of plan based on a bad
statistical estimate. The planner is estimating that there are 2643
rows having domain_id = 157, when actually the true number is very
different.
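The size of the misestimate can be seen by comparing the planner's guess
against an actual count, e.g.:

EXPLAIN SELECT * FROM topical_urls WHERE domain_id = 157;
SELECT count(*) FROM topical_urls WHERE domain_id = 157;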
I have this query; it runs reasonably quickly (but should be quicker IMHO):

SELECT "consolidated_urls".* FROM "consolidated_urls" INNER JOIN "topical_urls"
ON "consolidated_urls".id = "topical_urls".conso