https://gcc.gnu.org/bugzilla/show_bug.cgi?id=69823

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |ASSIGNED
           Assignee|unassigned at gcc dot gnu.org|rguenth at gcc dot gnu.org

--- Comment #8 from Richard Biener <rguenth at gcc dot gnu.org> ---
So here we're adding constraints from a condition whose operands are defined
by a load, and a preceding loop in the region has stores.  Thus

tree
scalar_evolution_in_region (const sese_l &region, loop_p loop, tree t)
{
...
  bool has_vdefs = false;
  if (invariant_in_sese_p_rec (t, region, &has_vdefs))
    return t;

  /* T variates in REGION.  */
  if (has_vdefs)
    return chrec_dont_know;

triggers.  I thought dependence analysis should handle this case.

But it looks like we never check whether any stmt in this loop is harmful,
since stmts outside of loops fully contained in the SESE are not checked by
harmful_loop_in_region, called from merge_sese.  The root has its loop->header
== ENTRY, so no SESE region can ever cover it.

Hum.  It looks like get_dominated_to_depth fails to return all BBs of the
region... it seems to be the wrong interface to use for this.  Consider
a region

    /\
   /  \
  / \  \
  \ /  /
   \  /
    \/

then the depth of the entry is 1 and that of the exit is 2, but inside the
region we have nodes with depth 3.

I suppose using a worklist (we check for region membership anyway) fixes this.