On Wed, 3 Apr 2002, Tom Lane wrote:
>
> I'm confused. Your examples show the planner correctly estimating the
> indexscan as much cheaper than the seqscan.
>...
> Cut-and-paste mistake here somewhere, perhaps? The plan refers to fact,
> not fact_by_dat.
My apologies... It was indeed doing the
Ron Mayer <[EMAIL PROTECTED]> writes:
> I did quite a bit more playing with this, and no matter what the
> correlation was (1, -0.001), it never seemed to have any effect
> at all on the execution plan.
> Should it? With a high correlation the index scan is a much better choice.
I'm confused.
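For anyone replaying this thread, a minimal sketch of how the correlation statistic in question can be inspected, assuming a 7.2-era pg_stats view; the fact/dat names follow the thread, and the date literal is made up for illustration:

```sql
-- Refresh the statistics, then look at the correlation the planner sees
-- for the dat column (names follow the thread; output varies by install).
ANALYZE fact;

SELECT tablename, attname, correlation
  FROM pg_stats
 WHERE tablename = 'fact' AND attname = 'dat';

-- With correlation near 1 or -1 an index scan on dat should look much
-- cheaper; EXPLAIN shows whether the plan actually changes.
EXPLAIN SELECT * FROM fact WHERE dat = '2002-02-15';
```

The question in the thread is precisely whether the second EXPLAIN reacts to the correlation value at all.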
On Tue, 26 Mar 2002, Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
> >> I'm particularly interested in the correlation estimate for the dat
> >> column. (Would you happen to have an idea whether the data has been
> >> inserted more-or-less in dat order?)
>
> I believe much of February
First off, thanks to everyone on the list who suggested useful workarounds:
with them in place, my application is working wonderfully again.
Anyway, here's some more information about the "=" vs. "<= and >=" question
I had earlier today...
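The "=" vs. "<= and >=" question can be illustrated with a pair of logically equivalent queries that, in a 7.2-era planner, could receive different selectivity estimates and hence different plans. This is only a sketch; the fact/dat names follow the thread and the date literal is invented:

```sql
-- Equality predicate: the form that reportedly stopped using the index.
EXPLAIN SELECT count(*) FROM fact
 WHERE dat = '2002-02-15';

-- Equivalent closed range: the workaround form discussed on the list.
EXPLAIN SELECT count(*) FROM fact
 WHERE dat >= '2002-02-15' AND dat <= '2002-02-15';
```

Comparing the two EXPLAIN outputs shows whether the planner prices the range form differently from the equality form.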
On Tu
Ron Mayer <[EMAIL PROTECTED]> writes:
>> I'm particularly interested in the correlation estimate for the dat
>> column. (Would you happen to have an idea whether the data has been
>> inserted more-or-less in dat order?)
> I believe much of February was loaded first, then we back-filled January,
>
I had an issue where my index was not always used on a very large table.
The issue came down to the data distribution: ANALYZE was not pulling in
enough of a random sample to get an accurate estimate (I think the default
maximum was around 3000 sample rows: 300 * the default statistics target
of 10 -- see analyze.c).
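One hedged workaround when the default sample is too small is to raise the per-column statistics target (the sample size is roughly 300 * target rows) and re-analyze. The table and column names here are illustrative, not from the poster's schema:

```sql
-- Raise the statistics target for the skewed column from the default
-- of 10 to 100, enlarging ANALYZE's random sample accordingly.
ALTER TABLE fact ALTER COLUMN dat SET STATISTICS 100;
ANALYZE fact;
```

After this, the row-count estimates in EXPLAIN output should track the real distribution more closely.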
On Tue, 26 Mar 2002, Tom Lane wrote:
>
> Ron Mayer <[EMAIL PROTECTED]> writes:
> > [...] pretty large, PostgreSQL suddenly stopped using indexes [...]
> [...]
>
> 212K estimate for 180K real is not bad at all. So the problem is in the
> cost models, not the initial row-count estimation.
>
> If yo
Ron Mayer <[EMAIL PROTECTED]> writes:
> Once some of my tables started getting pretty large, PostgreSQL
> suddenly stopped using indexes when I used expressions like "col = value",
> decreasing performance by 20X.
Hmm. The EXPLAIN shows that the planner is not doing too badly at
estimating the n
In porting a pretty large (10s of millions of records) data warehouse
from Oracle to PostgreSQL,
Once some of my tables started getting pretty large, PostgreSQL
suddenly stopped using indexes when I used expressions like "col = value",
decreasing performance by 20X. This meant that my daily