Sorry, I should have said 'vacuum analyze verbose'...
On Sat, 2005-02-26 at 00:45 -0500, Greg Stark wrote:
> Mike Harding <[EMAIL PROTECTED]> writes:
>
> > The following was run -immediately- after a vacuum.
>
> You realize "vacuum" doesn't update the statistics, right?
> You have to do "analyze" or "vacuum analyze" for that.
Mike Harding <[EMAIL PROTECTED]> writes:
> The following was run -immediately- after a vacuum.
You realize "vacuum" doesn't update the statistics, right?
You have to do "analyze" or "vacuum analyze" for that.
--
greg
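To make the distinction concrete, here is a minimal sketch against a hypothetical table (the name "mytable" is assumed for illustration): plain VACUUM only reclaims dead rows, while ANALYZE, alone or combined with VACUUM (optionally with VERBOSE, as Mike intended), is what refreshes the planner statistics.

    -- Reclaims dead tuples but does NOT refresh planner statistics.
    VACUUM mytable;

    -- Either of these updates the statistics the planner uses:
    ANALYZE mytable;
    VACUUM VERBOSE ANALYZE mytable;
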
Mike Harding <[EMAIL PROTECTED]> writes:
> Any way to adjust n_distinct to be more accurate?
You could try increasing the statistics target for the relevant columns.
What does pg_stats show for the "n_distinct" estimates of the columns
you are grouping over, and does that have anything to do with reality?
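A sketch of what that suggestion looks like in practice, using a hypothetical table and column (the names "orders" and "customer_id" are assumptions, and the statistics target of 1000 is only an illustrative value):

    -- Check the planner's current distinct-value estimate for the grouped column.
    SELECT tablename, attname, n_distinct
    FROM pg_stats
    WHERE tablename = 'orders' AND attname = 'customer_id';

    -- Raise the per-column statistics target so ANALYZE samples more rows,
    -- then re-analyze to refresh the estimate.
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 1000;
    ANALYZE orders;
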
Any way to adjust n_distinct to be more accurate?
I don't think a 'disk spill' would be that bad, if you could re-sort the
hash in place. If nothing else, if it could -fail- when memory use reaches
the stratosphere and re-start, that would still be faster than getting no
result at all... sort of an automatic disable.
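For what it's worth, one way to see how far off the estimate actually is (same hypothetical table and column as above) is to compare pg_stats against an exact count; much later PostgreSQL releases (9.0 and up) also let you override the estimate directly with ALTER TABLE ... ALTER COLUMN ... SET (n_distinct = ...), though that option did not exist at the time of this thread.

    -- Exact number of distinct values (can be slow on large tables).
    SELECT count(DISTINCT customer_id) FROM orders;

    -- Planner's estimate; negative values mean a fraction of the row count.
    SELECT n_distinct FROM pg_stats
    WHERE tablename = 'orders' AND attname = 'customer_id';
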
Mike Harding <[EMAIL PROTECTED]> writes:
> I've been having problems where a HashAggregate is used because of a bad
> estimate of the distinct number of elements involved.
If you're desperate, there's always enable_hashagg. Or reduce sort_mem
enough so that even the misestimate looks like it will exceed sort_mem.
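
A sketch of both workarounds as session-level settings (the thread uses sort_mem, the pre-8.0 name; later releases call it work_mem, and the value below is only an illustrative figure):

    -- Disable hash aggregation for the current session only.
    SET enable_hashagg = off;

    -- Or shrink the per-sort/hash memory budget so even the underestimated
    -- hash table looks too big to fit, steering the planner toward a
    -- GroupAggregate instead.
    SET sort_mem = 1024;   -- in kB; newer releases use work_mem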