>> [1] Occasionally it will store a much longer MCV list, because no values
>> was sampled exactly once, which triggers a different code path in which
>> all seen values are put in the MCV and the histogram is NULL. This is not
>> reliable, as whether the least-sample value is present in the sample once
>> or twice is pretty brittle.
>
> And we need a better discussion of risk: Before we generated too few
> MCV entried. To what extent might me now generate too many? Which
> would be a problem in increased planning time.
(My apologies, I just now found some time to test this further, but I don't
have additional results yet)

Simon,

Earlier, I referenced a thread [1] complaining that we currently already
have too many MCVs in the case of uniform distributions, with worse
consequences than planning time. Based on my (admittedly quick and dirty)
preliminary testing (see attachment from a couple weeks ago), this patch
exacerbates that problem, and I was hoping to find a way to fix that.

> I have a slight reservaton about whether 1.25x is still a sensible
> heuristic.

This was also discussed in [1], but no patch came out of it. I was just now
turning the formulas discussed there into code, but I'll defer to someone
with more expertise. FWIW, I suspect that a solution that doesn't take into
account a metric like coefficient of variation will have the wrong behavior
sometimes, whether for highly uniform or highly non-uniform distributions.

[1] https://www.postgresql.org/message-id/flat/32261.1496611829%40sss.pgh.pa.us#32261.1496611...@sss.pgh.pa.us

-John Naylor
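
To make the coefficient-of-variation idea above concrete, here is a rough,
standalone sketch. This is not analyze.c code and not the formulas from [1];
the 0.1 cutoff, the function and variable names (choose_mcvs, counts,
ndistinct), and the way the 1.25x factor is applied are all placeholders
chosen purely for illustration.

    /*
     * Rough standalone sketch (not analyze.c): pick MCV candidates from
     * per-value sample counts.  If the counts vary little (low coefficient
     * of variation), treat the distribution as near-uniform and emit no
     * MCVs; otherwise keep values clearly more common than average.
     */
    #include <math.h>
    #include <stdio.h>

    static int
    choose_mcvs(const int *counts, int ndistinct, int *keep)
    {
        double  sum = 0.0,
                sumsq = 0.0;
        double  mean,
                stddev,
                cv;
        int     nkeep = 0;

        for (int i = 0; i < ndistinct; i++)
            sum += counts[i];
        mean = sum / ndistinct;

        for (int i = 0; i < ndistinct; i++)
            sumsq += (counts[i] - mean) * (counts[i] - mean);
        stddev = sqrt(sumsq / ndistinct);
        cv = stddev / mean;

        /* Near-uniform sample: no MCVs at all (0.1 is a made-up cutoff). */
        if (cv < 0.1)
            return 0;

        /* Otherwise keep values exceeding 1.25x the average count. */
        for (int i = 0; i < ndistinct; i++)
            if (counts[i] > 1.25 * mean)
                keep[nkeep++] = i;

        return nkeep;
    }

    int
    main(void)
    {
        /* Per-value sample counts: one skewed case, one roughly uniform. */
        int     skewed[] = {50, 40, 5, 4, 3, 2, 2, 1, 1, 1};
        int     uniform[] = {11, 10, 10, 9, 10, 11, 9, 10, 10, 10};
        int     keep[10];

        printf("skewed:  %d MCVs\n", choose_mcvs(skewed, 10, keep));
        printf("uniform: %d MCVs\n", choose_mcvs(uniform, 10, keep));
        return 0;
    }

With these made-up inputs the skewed case keeps 2 MCVs and the uniform case
keeps none, which is the behavior I'd hope for; the real question is what
cutoff (if any) is defensible, which is why I'd defer to someone with more
statistics expertise.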