On Mon, Oct 1, 2012 at 12:50 AM, Vladimir Makarov <vmaka...@redhat.com> wrote:
> As I wrote, I don't see that LRA has a problem right now because even on
> 8GB machine, GCC with LRA is 10% faster than GCC with reload with real
> time point of view (not saying that LRA generates 15% smaller code). And
> real time is what really matters for users.
For me, those compile times I reported *are* real times. But you are right
that the test case is a bit extreme. Before GCC 4.8, other parts of the
compiler also choked on it. Still, the test case comes from real user's
code (a combination of the Eigen library with MPFR), and it shows
scalability problems in LRA (and IRA) that one can't just "explain away"
with an "RA is just expensive" claim. The test case for PR26854 is Brad
Lucier's Scheme interpreter, which is also real user's code.

FWIW, I had actually expected IRA to do extremely well on this test case,
because IRA is supposed to be a regional allocator and I had expected that
to help with scalability. But most of the region data structures in IRA
are designed to hold whole functions (e.g. several per-region arrays of
size max_reg_num / max_insn_uid / ...), and that appears to be a problem
for IRA's memory footprint. Perhaps something similar is going on with
LRA?

Ciao!
Steven