Richard Guenther wrote:
> On 11/4/06, Kenneth Zadeck <[EMAIL PROTECTED]> wrote:
>> I think that it is time that we in the GCC community took some time to
>> address the problem of compiling very large functions in a somewhat
>> systematic manner.
>>
>> GCC has two competing interests here:  it needs to be able to provide
>> state of the art optimization for modest sized functions and it needs to
>> be able to properly process very large machine generated functions using
>> reasonable resources.
>>
>> I believe that the default behavior for the compiler should be that
>> certain non essential passes be skipped if a very large function is
>> encountered.
>>
>> There are two problems here:
>>
>> 1) defining the set of optimizations that need to be skipped.
>> 2) defining the set of functions that trigger the special processing.
>>
>>
>> For (1) I would propose that three measures be made of each function.
>> These measures should be made before inlining occurs. The three measures
>> are the number of variables, the number of statements, and the number of
>> basic blocks.
>
> Why before inlining?  These three numbers can change quite significantly
> as a function passes through the pass pipeline.  So we should try to keep
> them up-to-date to have an accurate measurement.
>
I am flexible here. We may want inlining to be able to update the
numbers.  However, I think that we should also drive the inlining
aggression based on these numbers, roughly along the lines of the
sketch below.
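
To make this concrete, here is a rough sketch of the kind of gating I
have in mind.  None of the type names, function names, or threshold
values below are existing GCC interfaces or tuned numbers; they are
purely illustrative.

/* Illustrative only: made-up types and thresholds, not existing GCC
   interfaces.  */
#include <stdbool.h>

struct function_size
{
  int num_vars;    /* referenced variables  */
  int num_stmts;   /* statements            */
  int num_blocks;  /* basic blocks          */
};

/* Fixed, platform-independent thresholds so that results are
   reproducible regardless of the host machine.  */
#define BIG_FUNCTION_STMTS   50000
#define BIG_FUNCTION_BLOCKS  10000
#define BIG_FUNCTION_VARS    20000

static bool
function_is_huge (const struct function_size *s)
{
  return s->num_stmts > BIG_FUNCTION_STMTS
         || s->num_blocks > BIG_FUNCTION_BLOCKS
         || s->num_vars > BIG_FUNCTION_VARS;
}

/* The gate of a non-essential, super-linear pass would then skip the
   pass for huge functions.  */
static bool
gate_expensive_pass (const struct function_size *s)
{
  return !function_is_huge (s);
}

/* The inliner could consult the same measures to throttle how much it
   is willing to grow an already-large caller.  */
static bool
allow_aggressive_inlining (const struct function_size *caller)
{
  return !function_is_huge (caller);
}

The point of keeping the thresholds as fixed constants (or --param
style knobs) rather than deriving them from the host is exactly the
reproducibility concern below.
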
> Otherwise the proposal sounds reasonable but we should make sure the
> limits we impose allow reproducible compilations for N x M cross
> configurations and native compilation on different sized machines.
>
I do not want to get into the game of looking at the size of the host
machine and making this decision based on that.  Doing so would make it
hard to reproduce bugs that come in from the field.  Thus, I think that
the limits (or the functions that compute them) should be platform
independent.

> Richard.
