On 6/15/07, Eric Botcazou <[EMAIL PROTECTED]> wrote:
> > Please, just look at those charts
> >
> > https://vmakarov.108.redhat.com/nonav/spec/comparison.html
> >
> > The compilation speed decrease without a performance improvement (at
> > least for the default case) is really scary.
> Right, I also found those charts a bit depressing, given the time and
> energy that have been put into the compiler since GCC 3.2.3.  For
> example, it seems that the Tree-SSA infrastructure has brought very
> little benefit in terms of performance in the generic case, in exchange
> for a massive dump of new code.
>
> Does anyone have the beginning of an idea as to why this is so?  Did
> GCC hit a fundamental wall some time ago, for example because of its
> portability?

No, GCC hit a fundamental wall because its backend was not modern.
The code we generate out of Tree-SSA is, in general, as good as or
better than what other compilers generate out of their middle ends.

The remaining problem is that they have much better backends than we do.

Until dataflow, *nobody* had done any sort of major undertaking to
make the *entire* backend actually modern in any way, shape or form.

> On the other hand, those efforts have not been lost, since the compiler
> is now much more modern in terms of infrastructure and algorithms.
> However, before triggering the next internal earthquake (namely LTO),
> we should probably try to understand what's going on (or not going on).
This is not really rocket science, to be honest.  No matter how good
the code you generate out of your middle end is (including
LTO -> middle end), if your backend does a crappy job of code
generation, you will end up with crappy code.

Our backends have done roughly the same good/bad job of generating
code since, well, forever.  Until that changes, the only performance
improvements you will see in the general case are things that the
backend was too dumb to get in the first place (i.e., loads and
stores).

--Dan
