On Mon, Dec 7, 2015 at 8:35 AM, Jim Nasby <jim.na...@bluetreble.com> wrote:
> On 12/6/15 10:38 AM, Tom Lane wrote:
>>
>> I said "in most cases". You can find example cases to support almost any
>> weird planner optimization no matter how expensive and single-purpose;
>> but that is the wrong way to think about it. What you have to think about
>> is average cases, and in particular, not putting a drag on planning time
>> in cases where no benefit ensues. We're not committing any patches that
>> give one uncommon case an 1100X speedup by penalizing every other query
>> 10%, or even 1%; especially not when there may be other ways to fix it.
>
> This is a problem that seriously hurts Postgres in data warehousing
> applications. We can't keep ignoring optimizations that provide even as
> little as 10% execution improvements for 10x worse planner performance,
> because in a warehouse it's next to impossible for planning time to matter.
>
> Obviously it'd be great if there was a fast, easy way to figure out
> whether a query would be expensive enough to go the whole 9 yards on
> planning it, but at this point I suspect a simple GUC would be a big
> improvement.

Something like "enable_equivalencefilters", but defaulting to false unlike every existing "enable_*" GUC?

It would be a lot more user-friendly to have something along the lines of a "planner_mode" (text) GUC with labels like "star", "transactional", "bulk_load", etc., because I suspect there are other things we'd want to add if we start identifying queries by their type/usage and optimizing accordingly. Having the individual knobs available is necessary, but putting a façade over them would make things more straightforward for the user in the common cases.

David J.
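[For readers outside the thread: the existing planner knobs are per-session boolean GUCs that all default to on. A sketch of what the proposal above might look like in practice — note that "planner_mode" and "enable_equivalencefilters" are hypothetical names from this discussion and do not exist in PostgreSQL:]

```
-- Existing pattern: per-session planner toggles, all defaulting to 'on'
SET enable_seqscan = off;

-- Hypothetical, per the proposal above (not implemented):
SET enable_equivalencefilters = on;  -- opt in to the expensive optimization
SET planner_mode = 'star';           -- or a mode facade over several knobs
```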