On Wed, Dec 05, 2007 at 06:49:00PM +0100, Guillaume Smet wrote:
> On Dec 5, 2007 3:26 PM, Greg Sabino Mullane <[EMAIL PROTECTED]> wrote:
> > Agreed, this would be a nice 8.4 thing. But what about 8.3 and 8.2? Is
> > there a reason not to make this change? I know I've been lazy and not
> > run any absolute figures, but rough tests show that raising it (from 10
> > to 100) results in a very minor increase in analyze time, even for
> > large databases. I think the burden of a slightly slower analyze time,
> > which can be easily adjusted both in postgresql.conf and right before
> > running an analyze, is very small compared to the pain of some queries
> > - which worked before - suddenly running much, much slower for no
> > apparent reason at all.
>
> As Tom stated earlier, the ANALYZE slowdown is far from being the only
> consequence. The planner will also have more work to do, and that's the
> hard point IMHO.
How much more? Doesn't it now use a binary search? If so, ISTM that going
from 10 to 100 would at worst double the time spent finding the bucket we
need. Considering that we're talking about something that takes
microseconds, and that there's a huge penalty to be paid if you have bad
stats estimates, that doesn't seem like a big deal. And on modern machines
it's not as if the additional space in the catalogs is going to kill us.

FWIW, I've never seen anything but a performance increase or no change
when going from 10 to 100. In most cases there's a noticeable improvement,
since it's common to have over 100k rows in a table, and there's just no
way to capture any kind of real picture of that with only 10 buckets.
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828
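
[Editorial sketch of the cost argument above, assuming the planner locates
the histogram bucket by binary search as described: the number of
comparisons grows with log2 of the bucket count, so going from 10 to 100
buckets roughly doubles the per-lookup work. find_bucket and the random
bound values are illustrative only, not PostgreSQL's selectivity code.]

  import math
  import random

  def find_bucket(bounds, value):
      """Binary search over sorted histogram bounds, counting comparisons."""
      lo, hi, comparisons = 0, len(bounds), 0
      while lo < hi:
          mid = (lo + hi) // 2
          comparisons += 1
          if bounds[mid] < value:
              lo = mid + 1
          else:
              hi = mid
      return lo, comparisons

  for n_buckets in (10, 100):
      bounds = sorted(random.sample(range(1_000_000), n_buckets))
      value = random.randrange(1_000_000)
      bucket, cmps = find_bucket(bounds, value)
      print(f"{n_buckets:>3} bounds: bucket {bucket:>3}, "
            f"{cmps} comparisons (log2 ~ {math.log2(n_buckets):.1f})")

Running this prints roughly 4 comparisons for 10 bounds and 7 for 100,
which is the "at worst double" scaling referred to above.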