https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65363

Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2015-03-10
     Ever confirmed|0                           |1

--- Comment #1 from Richard Biener <rguenth at gcc dot gnu.org> ---
FRE can only eliminate the dominated load (obviously), so the first one is the
one that prevails.  Not removing the redundant load would obviously be worse ;)
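A hypothetical illustration of the situation (not the bug's actual testcase;
the function and variable names are made up).  The second load carries
alignment information via __builtin_assume_aligned, but it is dominated by
the first load, so FRE replaces it with v1 and the alignment knowledge
attached to the second access is lost:

```c
#include <stdint.h>

int f(int *p)
{
    int v1 = *p;                           /* first load: no alignment info */
    int *q = __builtin_assume_aligned(p, 16);
    int v2 = *q;                           /* redundant load, but aligned */
    /* FRE eliminates the dominated load of v2 in favor of v1; the
       16-byte alignment fact attached to *q does not survive. */
    return v1 + v2;
}
```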

The only possibility I see is to somehow derive alignment information from
the always-executed second load and propagate that backwards.  For example
with VRP if you see

  tem_1 = MEM [ptr_2];

in a basic-block and the access tells you that ptr_2 is aligned, thus
ptr_2 & 3 == 0 for example, you put an assert for that _at the beginning_
of the basic block.  You'd then make sure to adjust memory accesses
to "apply" the alignment computed for SSA names (which may be available
only temporarily during VRP and no longer after the asserts are removed).
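A sketch of what the inserted assert might look like, in GIMPLE-like
pseudocode (reusing the ASSERT_EXPR form VRP already emits; block and SSA
names are made up):

```
  <bb 5>:
    # inserted at block start: the unconditional load below implies that
    # the pointer is 4-byte aligned throughout this block
    ptr_3 = ASSERT_EXPR <ptr_2, (ptr_2 & 3) == 0>;
    ...
    tem_1 = MEM [ptr_3];    # always-executed access that justified the assert
```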

Not sure whether that is an important optimization in practice.  And of course
FRE runs before VRP, so we'd need to do this quite early on...

Also consider

  v2 = ((misaligned_t *)a)->a[i];
  if ((a & 15) == 0)
    v = ((aligned_t *)a)->a[i];

where we may not do this but FRE still will replace the aligned load.
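Filled out as compilable C (misaligned_t, aligned_t, and read_elem are
hypothetical names invented to complete the snippet): here the aligned load
is only conditionally executed, so the backward alignment propagation above
must not fire, yet FRE will still replace the guarded aligned load with the
earlier unaligned one.

```c
#include <stdint.h>

/* The same payload viewed once without and once with an alignment guarantee. */
typedef struct { int a[4]; } __attribute__((packed)) misaligned_t;
typedef struct { int a[4]; } __attribute__((aligned(16))) aligned_t;

int read_elem(const void *p, int i)
{
    int v2 = ((const misaligned_t *)p)->a[i];   /* unconditional, unaligned */
    int v = v2;
    if (((uintptr_t)p & 15) == 0)
        v = ((const aligned_t *)p)->a[i];       /* guarded, 16-byte aligned */
    return v;
}
```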
