On Mon, Oct 1, 2012 at 9:51 PM, Vladimir Makarov <vmaka...@redhat.com> wrote:
>> I think it's more important in this case to recognize Steven's real
>> point, which is that for an identical situation (IRA), and with an
>> identical patch author, we had similar bugs.  They were promised to be
>> worked on, and yet some of those regressions are still very much with
>> us.
>
> That is not true.  I worked on many compile-time regression bugs. I remember
> one serious degradation of compilation time on all_cp2k_gfortran.f90.  I
> solved the problem and made IRA work faster and generate much better
> code than the old RA.
>
> http://blog.gmane.org/gmane.comp.gcc.patches/month=20080501/page=15
>
> About the other two PRs Steven mentioned:
>
> PR26854.  I worked on this bug even while IRA was still on the branch, and
> again made GCC with IRA 5% faster on this test than GCC with the old RA.
>
> PR 54146 is 3 months old.  There was a lot of work on other optimizations
> before IRA became the bottleneck; that happened only 2 months ago.  I have
> not had time to work on it yet, but I am going to.

This is also not quite true; see PR37448, which exhibits the same
problems as the test case for PR54146.

I just think scalability is a very important issue. If some pass or
algorithm scales badly on some measure, then users _will_ run into that
at some point and report bugs about it (if you're lucky enough to have
a user patient enough to sit out the long compile time :-) ). Also,
good scalability opens up opportunities. For example, GCC has
historically been conservative in its inlining heuristics to avoid
compile-time explosions. I think it's better to address the causes of
such explosions and to avoid introducing new potential bottlenecks.


> People sometimes see that RA takes a lot of compilation time, but that is in
> the nature of RA.  I'd recommend first checking how the old RA behaves
> before calling it a degradation.

There's no question that RA is one of the hardest problems the
compiler has to solve, being NP-complete and all that. I like LRA's
iterative approach, but if you know you're going to solve a hard
problem with a number of potentially expensive iterations, that's all
the more reason to make scalability a design goal!
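To make the "NP-complete and all that" point concrete: register allocation is commonly modeled as coloring an interference graph, where nodes are live ranges and an edge means two ranges are live simultaneously and cannot share a register. Finding an optimal k-coloring is NP-complete, which is why real allocators fall back on heuristics. The following is only an illustrative sketch of one such greedy heuristic (nothing like GCC's actual IRA/LRA algorithms; the graph and function names are invented for the example):

```python
# Illustrative sketch: register allocation as interference-graph
# coloring.  Nodes are live ranges; an edge means two ranges overlap
# and cannot share a register.  Optimal k-coloring is NP-complete,
# so allocators rely on heuristics like the greedy pass below.

def greedy_color(interference, k):
    """Assign each node a color in 0..k-1, or None (spill candidate).

    `interference` maps each node to the set of its neighbours.
    This is a toy greedy pass, not GCC's actual algorithm.
    """
    coloring = {}
    # Visit nodes in decreasing degree order (a common heuristic).
    for node in sorted(interference, key=lambda n: -len(interference[n])):
        used = {coloring.get(nb) for nb in interference[node]}
        free = [c for c in range(k) if c not in used]
        coloring[node] = free[0] if free else None  # None => must spill
    return coloring

# Example: a, b, c mutually interfere; d only interferes with a.
graph = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
}
coloring = greedy_color(graph, k=3)
```

With three registers the clique {a, b, c} forces three distinct colors, while d can reuse a color that differs from a's; with k=2 the same graph would already produce a spill candidate, which is where the expensive iterative rework starts.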

As I said earlier in this thread, I was really looking forward to IRA
at the time you worked on it, because it is supposed to be a regional
allocator, and I had expected that to mean it could, well, allocate
per region, which is usually very helpful for scalability (partition
your function and insert compensation code on strategically picked
region boundaries). But that's not what IRA has turned out to be.
(Instead, its regional nature is one of the reasons for its
scalability problems.)  IRA is certainly no worse than the old
global.c in very many ways, and LRA looks like a well-thought-through
and welcome replacement for the old reload. But scalability is an
issue in the design of IRA, and LRA looks to be the same in that
regard.
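The per-region scheme described above can be sketched as follows. This is a hedged toy model of the idea, not what IRA actually does: partition the function's blocks into regions, allocate each region in isolation (so cost is bounded by region size rather than whole-function size), and record where compensation moves would be needed at region boundaries. All names, the round-robin "allocation", and the data layout are invented for illustration:

```python
# Toy sketch of per-region allocation with boundary compensation code.
# NOT IRA's actual algorithm: each region is "allocated" independently
# (here by a trivial round-robin assignment), and a move is recorded
# wherever a value crosses a boundary in a different register.

def allocate_per_region(regions, live_across, k=4):
    """regions: list of lists of virtual-register names used per region.
    live_across: names live across region boundaries.
    Returns (per-region assignments, boundary moves needed)."""
    assignments = []
    moves = []
    for i, vregs in enumerate(regions):
        # Allocate this region in isolation: work is bounded by the
        # region's size, not the whole function's size.
        assignment = {v: j % k for j, v in enumerate(vregs)}
        assignments.append(assignment)
        if i > 0:
            prev = assignments[i - 1]
            for v in sorted(live_across):
                if v in prev and v in assignment and prev[v] != assignment[v]:
                    # Value lives in different registers on each side of
                    # the boundary: insert a compensation move here.
                    moves.append((i, v, prev[v], assignment[v]))
    return assignments, moves

regions = [["a", "b", "c"], ["c", "d"], ["d", "e", "a"]]
assignments, moves = allocate_per_region(regions, live_across={"a", "c", "d"})
```

The point of the sketch is the cost structure: each region's allocation never looks at the rest of the function, and the price you pay is the compensation moves on the boundaries, whose placement you can pick strategically.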

Ciao!
Steven
