On Tue, 2005-11-22 at 19:57 +0100, Gabriel Dos Reis wrote:
> Daniel Berlin <[EMAIL PROTECTED]> writes:
> 
> | On Tue, 2005-11-22 at 19:25 +0100, Gabriel Dos Reis wrote:
> | > Benjamin Kosnik  <[EMAIL PROTECTED]> writes:
> | > 
> | > [...]
> | > 
> | > | I'd actually like to make this a requirement, regardless of the option
> | > | chosen.
> | > 
> | > Amen.
> | > 
> | 
> | Uh, IPA of any sort is generally not about speed.
> | It's fine to say that compile-time performance of the middle-end
> | portions we may replace should be the same or better, but algorithms
> | that operate on large portions of the program are never fast, because
> | they aren't linear.
> | They usually take *at least* seconds per pass.
> | 
> | So you need to quantify "good".  
> 
> 
> As I understand it, we are going to merge information from different
> translation units for the purpose of link-time optimization.  I expect
> some increase in compile-time there.  I don't care whether the
> algorithms are linear or not.  What I do care about is that, for the end result,
> compile-time performance is kept in reasonable bounds -- no matter what
> implementation technology is finally decided on. 

Okay, but you need to understand that reasonable bounds for compiling
the entire program at once are usually 3x-7x more (and in the worst
case, even worse) than doing it separately.

That is the case with completely state of the art algorithms,
implementation techniques, etc.

It's just the way the world goes.

It's in no way reasonable to expect to be able to perform IPA
optimizations on a 1-million-line program in 30 seconds, even if we can
compile it normally in 10 seconds.
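To make the scaling argument above concrete, here is a minimal illustrative sketch (not GCC code; the function names and the toy reachability pass are hypothetical). Per-translation-unit work is linear in total program size, while even a simple interprocedural pass, such as naive transitive call-graph reachability, does work proportional to O(V * E) in the worst case, so its cost grows superlinearly as units are merged:

```python
def per_unit_cost(unit_sizes):
    # Separate compilation: one linear pass over each translation unit.
    return sum(unit_sizes)

def whole_program_reachability(call_graph):
    # Naive interprocedural pass: for each function, walk everything it
    # can transitively reach. Worst case is O(V * E), not linear.
    steps = 0
    reach = {}
    for fn in call_graph:
        seen, stack = set(), [fn]
        while stack:
            cur = stack.pop()
            steps += 1
            for callee in call_graph.get(cur, []):
                if callee not in seen:
                    seen.add(callee)
                    stack.append(callee)
        reach[fn] = seen
    return reach, steps

# A chain of calls f0 -> f1 -> ... -> f9: 10 functions, 9 edges.
graph = {f"f{i}": [f"f{i+1}"] for i in range(9)}
graph["f9"] = []
reach, steps = whole_program_reachability(graph)
print(len(reach["f0"]))  # f0 transitively reaches the 9 other functions
print(steps)             # 55 node visits for only 10 functions
```

Doubling the chain length roughly quadruples `steps`, while `per_unit_cost` merely doubles; that gap is the source of the 3x-7x (or worse) slowdowns discussed above.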

--Dan