I have encountered serious underestimation of the number of distinct
values when values are not evenly distributed but clustered within a
column. I think this problem might be relevant to many real-world use
cases, and I wonder whether there is a good workaround or possibly a
programmatic solution that could be implemented.
Thanks for your help!
Stefan
*The Long Story:*
When Postgres collects statistics, it estimates the number of distinct
values for every column (see pg_stats.n_distinct). This estimate is an
important input for the planner's selectivity calculations and can
therefore have great influence on the resulting query plan.
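For reference, the current estimate can be inspected directly (schema,
table and column names here are placeholders):

    SELECT attname, n_distinct
      FROM pg_stats
     WHERE schemaname = 'public'
       AND tablename  = 'my_table';

A positive n_distinct is an absolute count; a negative value is minus
the ratio of distinct values to total rows, so -1 means the column is
believed to be unique.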
My Problem:
When I collected statistics on some columns that have rather high
selectivity, but nothing like unique values, I consistently got
n_distinct values that were far too low (easily off by several orders
of magnitude). Worse still, the estimates did not really improve until
I analyzed the whole table.
I tested this on Postgres 9.1 and 9.2. An artificial test case is
described at the end of this mail.
Some Analysis of the Problem:
Unfortunately, it is not trivial to estimate the total number of
distinct values from a sample. As far as I could find out, Postgres
uses an algorithm that is based on the number of values that are found
only once in the sample taken by ANALYZE. I found references to
Good-Turing frequency estimation
(http://encodestatistics.org/publications/statistics_and_postgres.pdf)
and to a 1996 paper by Haas & Stokes
(http://researcher.ibm.com/researcher/files/us-phaas/jasa3rj.pdf). The
latter reference comes from Josh Berkus in a 2005 discussion on the
Postgres performance list (see e.g.
http://grokbase.com/t/postgresql/pgsql-performance/054kztf8pf/bad-n-distinct-estimation-hacks-suggested
for the whole discussion). The formula given there for the total number
of distinct values is:
    n_distinct = n*d / (n - f1 + f1*n/N)

where
    n  = number of rows in the sample,
    d  = number of distinct values found in the sample,
    f1 = number of values that occurred exactly once in the sample,
    N  = total number of rows in the table.
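To make the formula concrete, here is a worked example with made-up
numbers, evaluated as plain SQL arithmetic:

    -- Hypothetical sample: n = 30,000 rows out of N = 10,000,000 in the
    -- table; the sample contains d = 25,000 distinct values, of which
    -- f1 = 20,000 were seen exactly once.
    SELECT 30000.0 * 25000 / (30000 - 20000 + 20000 * 30000.0 / 10000000)
           AS estimated_n_distinct;
    -- => about 74,553; a large f1 scales the estimate well beyond d.

With the same numbers but f1 = 0, the estimate would be exactly
d = 25,000.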
Now, the 2005 discussion goes into great detail on the advantages and
disadvantages of this algorithm, particularly when using small sample
sizes, and several alternatives are discussed. I do not know whether
anything has been changed since then, but I do know that the specific
problem I will focus on here still persists.
When the number of values that are found only once in the sample (f1)
becomes zero, the formula reduces to n*d/n = d; that is, n_distinct is
estimated to be just the number of distinct values found in the sample.
This is basically fine, as it should only happen when the sample really
has covered more or less all distinct values. However, we have a
sampling problem here: for maximum efficiency, Postgres samples not
random rows but rows from random pages. If the distribution of the
values is not random but clustered (that is, rows with the same value
tend to be physically close together), we run into trouble: when every
sampled page covers multiple adjacent rows, the probability that a
value from a clustered distribution is sampled exactly once is very
low.

So, under these circumstances, the estimate for n_distinct will always
be close to the number of distinct values found in the sample, even if
every value in fact appears only a few times in the table.
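A minimal sketch of the effect (clustered_demo and fk_id are made-up
names; the true number of distinct values is 2,000,000):

    -- Each value occupies 5 physically adjacent rows.
    CREATE TABLE clustered_demo AS
        SELECT g / 5 AS fk_id
          FROM generate_series(0, 9999999) AS g;

    ANALYZE clustered_demo;

    SELECT n_distinct
      FROM pg_stats
     WHERE tablename = 'clustered_demo' AND attname = 'fk_id';

If the analysis above is right, n_distinct should come out far below
2,000,000, because whole pages are sampled and hardly any value is seen
exactly once.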
Relevance:
I think this is not just an unfortunate corner case, but a very common
situation. Imagine two tables that are filled continually over time,
where the second table references the first - some objects plus
multiple properties for each, say. The foreign key column of the second
table will then have many distinct values but a highly clustered
distribution. It certainly does not help if the planner significantly
underestimates the high selectivity of that foreign key column.
Workarounds:
There are workarounds: manually overriding the column statistics, or
using an extremely high statistics target so that effectively the whole
table gets analyzed. However, these workarounds do not seem elegant and
may be impractical.
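For concreteness, both workarounds in SQL (test_table is a placeholder
name; x_2000k follows the naming of the test case below, and 2,000,000
is simply the known true value):

    -- Override the estimate manually; takes effect at the next ANALYZE.
    ALTER TABLE test_table ALTER COLUMN x_2000k SET (n_distinct = 2000000);
    ANALYZE test_table;

    -- Or raise the per-column statistics target (default 100, maximum
    -- 10000) so that ANALYZE samples far more rows.
    ALTER TABLE test_table ALTER COLUMN x_2000k SET STATISTICS 10000;
    ANALYZE test_table;

The manual override in particular has to be maintained by hand as the
table grows, which is part of why it does not feel elegant.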
Questions:
A) Did I find the correct cause of my problem? Specifically, does
Postgres really estimate n_distinct as described above?
B) Are there any elegant workarounds?
C) What could a programmatic solution to this problem look like? I
think it might be possible to use, for f1, the number of values that
are found in only one page (instead of the number found only once at
all). Or the number of distinct values could be estimated by some
completely different approach?
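To illustrate the idea in C), the two quantities can be compared over a
whole table (slow, for illustration only; clustered_demo and fk_id are
the made-up names from the sketch above, and the block number is
extracted from ctid via a text/point cast):

    SELECT sum(CASE WHEN n_pages = 1 THEN 1 ELSE 0 END) AS values_on_one_page,
           sum(CASE WHEN n_rows  = 1 THEN 1 ELSE 0 END) AS values_seen_once
      FROM (SELECT fk_id,
                   count(DISTINCT (ctid::text::point)[0]) AS n_pages,
                   count(*)                               AS n_rows
              FROM clustered_demo
             GROUP BY fk_id) AS per_value;

For a clustered distribution, values_on_one_page stays large while
values_seen_once collapses to almost nothing - exactly the imbalance
that drives f1, and with it the estimate, down.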
Test Case:
For an artificial test case, let's create a table and fill it with 10
million rows (approx. 1,300 MB required). There is an ID column with
unique values and 4 groups of 3 columns each that have selectivities
(average rows per distinct value) of:
- 5 (x_2000k = 2,000,000 distinct values)
- 25 (x_400k = 400,000 distinct values)
- 125 (x_80k = 80,000 distinct values).
The 4 groups of columns show different distributions:
- clust