http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57748

--- Comment #25 from Richard Biener <rguenth at gcc dot gnu.org> ---
I think that

      tem = get_inner_reference (to, &bitsize, &bitpos, &offset, &mode1,
                                 &unsignedp, &volatilep, true);

      if (TREE_CODE (to) == COMPONENT_REF
          && DECL_BIT_FIELD_TYPE (TREE_OPERAND (to, 1)))
        get_bit_range (&bitregion_start, &bitregion_end, to, &bitpos, &offset);

      /* If we are going to use store_bit_field and extract_bit_field,
         make sure to_rtx will be safe for multiple use.  */
      mode = TYPE_MODE (TREE_TYPE (tem));
      if (TREE_CODE (tem) == MEM_REF
          && mode != BLKmode
          && ((align = get_object_alignment (tem))
              < GET_MODE_ALIGNMENT (mode))
          && ((icode = optab_handler (movmisalign_optab, mode))
              != CODE_FOR_nothing))

is overly pessimistic: it keys off TYPE_MODE (TREE_TYPE (tem)), the mode of
the access base object.  What matters is the mode of the access itself, not
that of the base object!

Which means the pre-handling of

  /* Handle misaligned stores.  */
  mode = TYPE_MODE (TREE_TYPE (to));
  if ((TREE_CODE (to) == MEM_REF
       || TREE_CODE (to) == TARGET_MEM_REF)
      && mode != BLKmode
      && !mem_ref_refers_to_non_mem_p (to)

should be made to trigger for all 'to', not just bare MEM_REF/TARGET_MEM_REF.
Then the other movmisalign path can be completely removed.
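A rough sketch of the suggested restructuring (not compilable on its own; it reuses the identifiers from the quoted gcc/expr.c fragments, and the exact guard set is an assumption):

```c
/* Sketch: key the misaligned-store pre-handling off the mode of the
   access itself, and let it trigger for any 'to', not just a bare
   MEM_REF/TARGET_MEM_REF.  */
mode = TYPE_MODE (TREE_TYPE (to));
if (mode != BLKmode
    && !mem_ref_refers_to_non_mem_p (to)
    && ((align = get_object_alignment (to))
        < GET_MODE_ALIGNMENT (mode))
    && ((icode = optab_handler (movmisalign_optab, mode))
        != CODE_FOR_nothing))
  {
    /* ... expand the store via movmisalign here; the later
       movmisalign path keyed on TYPE_MODE (TREE_TYPE (tem))
       then becomes dead and can be removed.  */
  }
```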
