On December 20, 2019 1:41:19 AM GMT+01:00, Jeff Law <l...@redhat.com> wrote:
>On Thu, 2019-12-19 at 17:06 -0600, Qing Zhao wrote:
>> Hi, Dmitry,
>> 
>> Thanks for the response. 
>> 
>> Yes, routine size alone cannot determine the complexity of a routine.
>> Different compiler analyses might use different formulas with multiple
>> parameters to compute that complexity. 
>> 
>> However, the common issue is that when the complexity of a specific
>> routine for a specific compiler analysis exceeds a threshold, the
>> compiler might consume all the available memory and abort the
>> compilation. 
>> 
>> Therefore, in order to avoid compilations that fail from running out
>> of memory, some compilers might set a threshold on the complexity of a
>> specific analysis (for example, the more aggressive data-flow
>> analysis); when the threshold is exceeded, that aggressive analysis is
>> turned off for the routine, or the optimization level is lowered for
>> the routine (with a warning issued at compile time about the
>> adjustment). 
>> 
>> I am wondering whether GCC has such a capability, or whether there is
>> any option to increase or decrease the threshold for some of the
>> common analyses (for example, data flow)?
>> 
>There are various places where, if we hit a limit, we throttle
>optimization.  But it's not done consistently or pervasively.
>
>Those limits are typically around things like CFG complexity.
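
A fair number of these limits are exposed as --param knobs, so as a rough
sketch (parameter names and defaults vary between GCC releases, so check the
manual for the exact list), tuning a couple of them could look like:

  gcc -O2 --param max-gcse-memory=262144 \
          --param sccvn-max-alias-queries-per-access=2000 foo.c

Raising a limit lets the analysis work on bigger routines at the cost of
memory and compile time; lowering it makes the pass give up earlier.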

Note we also have the (not consistently used) -Wmissed-optimizations warning,
which is supposed to warn when we run into this kind of limiting, telling the
user which knob he might be able to tune. 
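
For example (option spellings here are from memory, so double-check the
manual):

  gcc -O2 -Wdisabled-optimization foo.c        # warn when a pass gives up
  gcc -O2 -fopt-info-missed=missed.txt foo.c   # dump missed-optimization notes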

Richard. 

>We do _not_ try to recover after an out of memory error, or anything
>like that.
>
>jeff
