On Thu, Nov 07, 2013 at 09:48:46AM +0100, Richard Biener wrote:
> > --- gcc/tree-vrp.c.jj       2013-11-06 08:48:01.000000000 +0100
> > +++ gcc/tree-vrp.c  2013-11-06 09:32:19.205104029 +0100
> > @@ -92,6 +92,42 @@ static sbitmap *live;
> >  static bool
> >  live_on_edge (edge e, tree name)
> >  {
> > +  if (live[e->dest->index] == NULL)
> > +    {
> 
> I'd rather have live[] computed "correctly" than the following
> which I think is a hack ... as you say the issue is ordering
> (or rather that there isn't an "order" for CFG cycles unless
> we want to iterate).  For loop BB order we explicitly handle
> the latch, maybe just using a different order than
> RPO order, with special-casing the latch, makes more sense?

But is there any order that would help?  Sure, we could tweak the
rpo/bb_rpo arrays and put latch bbs before any other bbs from the same
loop, but that would only help us call find_assert_locations_1 on the
latch before the other bbs in the loop.  But that call does nothing, and
we'd still need to special-case the handling of live for the latch block
anyway, just not in live_on_edge, but in find_assert_locations instead.

Is that what you are looking for?
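
To make the ordering point a bit more concrete, here is a tiny
standalone sketch (not tree-vrp.c code; the CFG, the block names and all
the details are made up) that computes an RPO for a toy loop.  However
the DFS tie-breaks, the header always sorts before the latch, so there
is no single order in which a one-pass walk sees the back edge "early":

/* Toy CFG:  entry -> header -> body -> latch -> header  (back edge),
   plus header -> exit.  Compute its reverse post-order.  */

#include <stdio.h>

enum { ENTRY, HEADER, BODY, LATCH, EXIT, NBLOCKS };

/* succs[b] lists the successors of block b, terminated by -1.  */
static const int succs[NBLOCKS][3] = {
  [ENTRY]  = { HEADER, -1 },
  [HEADER] = { BODY, EXIT, -1 },
  [BODY]   = { LATCH, -1 },
  [LATCH]  = { HEADER, -1 },   /* the loop's back edge */
  [EXIT]   = { -1 }
};

static int visited[NBLOCKS], rpo[NBLOCKS], rpo_len;

/* Post-order DFS; reversing the post-order gives the RPO.  */
static void
dfs (int b)
{
  visited[b] = 1;
  for (int i = 0; succs[b][i] != -1; i++)
    if (!visited[succs[b][i]])
      dfs (succs[b][i]);
  rpo[rpo_len++] = b;
}

int
main (void)
{
  static const char *name[] = { "entry", "header", "body", "latch", "exit" };
  dfs (ENTRY);
  for (int i = rpo_len - 1; i >= 0; i--)
    printf ("%s ", name[rpo[i]]);
  printf ("\n");   /* prints: entry header exit body latch */
  return 0;
}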

Because without that special-casing of latch edges, you'd need to process
the header rather than the latch first and propagate through DFS_BACK
edges, but then you couldn't propagate through the other edges, and in
this testcase the live set for the latch still wouldn't be computed when
processing the header, which is where the condition is.

        Jakub
