https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523

--- Comment #41 from Richard Biener <rguenth at gcc dot gnu.org> ---
(In reply to Segher Boessenkool from comment #38)
> (In reply to Richard Biener from comment #36)
[...]
> But linear is linear, and stays linear, for way too big code it is just
> as acceptable as for "normal" code.  Just slow.  If you don't want the
> compiler to take a long time compiling your way too big code, use -O0,
> or preferably do not write insane code in the first place :-)

;)  We promise to try to behave reasonably with insane code, but
technically we tell people to use at most -O1 for that.  That will
at least avoid trying three- and four-insn combinations.

[...]

> Ideally we'll not do *any* artificial limitations.

I agree.  And we should try hard to fix actual algorithmic problems if
they exist before resorting to limits.

>  But GCC just throws its hat
> in the ring in other cases as well, say, too big RA problems.  You do get not
> as high quality code as wanted, but at least you get something compiled in
> an acceptable timeframe :-)

Yep.  See above for my comment about -O1.  I think it's fine to take
time (and memory) to produce high-quality code at -O2.  And if you
throw insane code at GCC, then it's fine for that to take an insane
amount of time and memory as well ;)

So I do wonder whether with -O1 the issue is already gone anyway?

If not, then for the sake of -O1 and insane code we want such a limit.
It can be more crude, aka just count all attempts and stop altogether,
or, like PRE, simply not run when the number of pseudos/blocks crosses
a magic barrier.  I just thought combine is a bit too core a part of
our instruction selection, so disabling it completely (after some
point) would be too bad even for insane code ...
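
A minimal sketch of what the "count all attempts" variant could look
like; the budget and the names here are invented for illustration, not
existing GCC code:

  #include <stdbool.h>

  /* Hypothetical global budget for combine attempts; once it is
     exhausted, no further combinations are tried.  The names and the
     default value are made up.  */
  static unsigned long combine_attempts_left = 100000;

  static bool
  try_combine_budgeted (void)
  {
    if (combine_attempts_left == 0)
      return false;              /* budget exhausted: give up quietly */
    combine_attempts_left--;
    return true;                 /* caller may attempt the combination */
  }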

Andreas - can you try --param max-combine-insns=2 please?  That is, I
think, what -O1 uses, and it then only does two-insn combinations.
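
For example (assuming a GCC where this param is available; check
gcc --help=params):

  gcc -O2 --param max-combine-insns=2 testcase.c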
