On Saturday, 07 January 2017 at 17:48:49, Tom Lane <t...@sss.pgh.pa.us> wrote:
Andreas Joseph Krogh <andr...@visena.com> writes:
 > I wonder; in general, is there any downside of having
 > join_collapse_limit = 14 on modern hardware (32 cores, 64GB RAM), and
 > geqo_threshold = 16?
 > I'm aware of it increasing planning time, but is this really an issue in
 > practice?

 It can be.  The number of possible join orders is exponential in the join
 size (at least 2^n, maybe worse, don't recall at the moment).  There are
 heuristics in the planner that usually let it avoid investigating large
 parts of that space; but the only thing that will certainly limit the
 problem is preventing large join subproblems from being considered in the
 first place --- which is what join_collapse_limit/from_collapse_limit are
 for.

 Depending on what your queries are like, you might not ever hit any of the
 bad cases, so it might be fine.  But I'd be nervous about throwing a wide
 mix of complex queries at a planner set with high thresholds.

 If you've got just one problem query, it might be worth your time to take
 note of the optimal join order (as seen in EXPLAIN when a good plan is
 chosen) and rearrange the query so that the syntactic join order matches
 that.  Then you should get the same plan even when join_collapse_limit is
 small.
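To illustrate the suggestion above (a sketch only; the table names and the "b joined to c first" order are hypothetical, standing in for whatever EXPLAIN actually showed):

```sql
-- Suppose EXPLAIN on the well-planned query showed b joined to c first,
-- then the result joined to a.  Writing the FROM clause in that same
-- order, with join_collapse_limit = 1, makes the planner take the
-- syntactic join order literally.
BEGIN;
SET LOCAL join_collapse_limit = 1;
SELECT a.id, b.val, c.val
FROM b
JOIN c ON c.b_id = b.id
JOIN a ON a.c_id = c.id;
COMMIT;
```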
 
Thanks.
Will geqo kick in if join_collapse_limit = 4 and there are many more joins, or 
will the planner just stop trying to rearrange JOINs after this limit?
In other words: is join_collapse_limit = 1 the only way, or might it work for 
other "small" values as well?
 
In any case, I assume you're suggesting something like "SET LOCAL 
join_collapse_limit = <some-value>" within the transaction, so this doesn't 
have to be a system-wide setting?
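Something like the following, presumably (the value 14 is just the one from the original question, not a recommendation):

```sql
BEGIN;
-- SET LOCAL only affects the current transaction; the setting
-- reverts automatically at COMMIT or ROLLBACK, so the system-wide
-- default is untouched.
SET LOCAL join_collapse_limit = 14;
-- ... run the one complex query here ...
COMMIT;
```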
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com