https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57600

--- Comment #6 from Marc Glisse <glisse at gcc dot gnu.org> ---
(In reply to alalaw01 from comment #5)
> Can you give an example where it not only doesn't help, but actually hurts?

I don't remember at all what I was talking about. I can imagine that if we are
in a branch predicated by i < n1, the compiler has an easy time turning
i<n1&&i<n2 into just i<n2, but a harder time turning i<min(n1,n2) into
i<n2, for instance, and that can block a whole chain of further optimizations.
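
To make that concrete, here is a hypothetical sketch (not taken from the bug
report; the function names are made up). In f() the guard sits in a branch
where i < n1 already holds, so the combined test folds trivially to i < n2;
in g() the same fact is hidden behind the min():

    /* Inside the loop, i < n1 is known, so the short-circuit form
       folds to i < n2 directly.  */
    int f (int *a, int n1, int n2)
    {
      int sum = 0;
      for (int i = 0; i < n1; i++)
        if (i < n1 && i < n2)  /* trivially just i < n2 here */
          sum += a[i];
      return sum;
    }

    /* Equivalent, but recovering i < n2 now requires reasoning
       through the min(), which is harder for the compiler.  */
    int g (int *a, int n1, int n2)
    {
      int sum = 0;
      int m = n1 < n2 ? n1 : n2;  /* min(n1, n2) */
      for (int i = 0; i < n1; i++)
        if (i < m)  /* i < min(n1,n2) plus i < n1 still means i < n2 */
          sum += a[i];
      return sum;
    }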

> Are they all just because of not seeing analysis properties, i.e. we could
> get there by realizing min(a,...)<=a and looking far enough to see a<X<=Y
> means a<Y ?

I guess it is always possible to add enough knowledge of this special case in
various places in gcc to avoid most regressions. I don't have enough data to
judge how hard it would be to make the transformation a win on average. It
might be that it is already more often beneficial than detrimental, for all I
know...
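
To spell out the kind of special-case knowledge that would be needed (a
hypothetical illustration, not an actual gcc patch): since min(n1,n2) <= n2
always holds, i < min(n1,n2) implies i < n2 by transitivity, i.e. i < X <= Y
means i < Y:

    #include <assert.h>

    /* Hypothetical illustration of the transitivity argument:
       from i < m and m <= n2 it follows that i < n2.  */
    static int min2 (int a, int b) { return a < b ? a : b; }

    void check (int i, int n1, int n2)
    {
      int m = min2 (n1, n2);
      if (i < m)
        {
          assert (m <= n2);  /* min(n1,n2) <= n2 always holds */
          assert (i < n2);   /* i < m <= n2  =>  i < n2 */
        }
    }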

> In terms of code generation,

I am more worried about the high-level optimizations we may miss, but note
that the number of comparisons is not necessarily the right metric: the
comparison a<b may be completely unpredictable, while t<a and t<b are always
true.
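
A hypothetical illustration of that predictability point (work() is just a
placeholder):

    extern void work (void);

    void h (int t, int a, int b)
    {
      /* If both tests are always true, these are two perfectly
         predicted branches.  */
      if (t < a && t < b)
        work ();

      /* After the transformation, the hidden a < b comparison may be
         completely unpredictable if it is compiled as a branch, even
         though the overall condition is still always true.  */
      if (t < (a < b ? a : b))
        work ();
    }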
