https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113441

--- Comment #35 from Richard Sandiford <rsandifo at gcc dot gnu.org> ---
Maybe I've misunderstood the flow of the ticket, but it looks to me like we do
still correctly recognise the truncating scatter stores.  And, on their own, we
would be able to convert them into masked scatters.
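
For concreteness, a loop of roughly this shape (an illustrative sketch,
not the PR's actual testcase) contains such a store:

  void
  f (short *restrict dst, const int *restrict src,
     const int *restrict idx, int n)
  {
    for (int i = 0; i < n; ++i)
      /* An int truncated to short and stored at an indexed location,
         vectorizable as a truncating scatter store.  */
      dst[idx[i]] = src[i];
  }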

The reason for the epilogue is instead on the load side.  There we have a
non-strided grouped load, and currently we hard-code the assumption that it is
better to use contiguous loads and permutes rather than gather loads where
possible.  So we have:

      /* As a last resort, trying using a gather load or scatter store.

         ??? Although the code can handle all group sizes correctly,
         it probably isn't a win to use separate strided accesses based
         on nearby locations.  Or, even if it's a win over scalar code,
         it might not be a win over vectorizing at a lower VF, if that
         allows us to use contiguous accesses.  */
      if (*memory_access_type == VMAT_ELEMENTWISE
          && single_element_p
          && loop_vinfo
          && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
                                                 masked_p, gs_info))
        *memory_access_type = VMAT_GATHER_SCATTER;

only after we've tried and failed to use load lanes or load+permute.  If
instead I change the order so that the code above is tried first, then we do
use extending gather loads and truncating scatter stores as before, with no
epilogue loop.
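
To make the load side concrete, a single-element grouped load with a
gap (again an illustrative sketch, not the PR's testcase) looks like:

  void
  g (int *restrict dst, const int *restrict src, int n)
  {
    for (int i = 0; i < n; ++i)
      /* Only every other element of src is read, so contiguous loads
         plus permutes would read one element past the data needed by
         the final iterations, forcing peeling for gaps; a strided
         gather of src[2 * i] needs no such epilogue.  */
      dst[i] = src[2 * i];
  }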

So I suppose the question is: if we do prefer to use gathers over load+permute
for some cases, how do we decide which to use?  And can it be done on a per-load
basis, or should it instead be a per-loop decision?  E.g., if we end up with a
loop that needs peeling for gaps, perhaps we should try again and forbid
peeling for gaps.  Then, if that succeeds, see which loop gives the better
overall cost.
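
A per-loop version of that retry might be shaped something like the
sketch below (purely illustrative: none of these names are existing
interfaces):

  /* Hypothetical sketch only; loop_plan, try_vectorize and the
     allow_peel_for_gaps flag are made-up names.  */
  loop_plan
  choose_plan (class loop *loop)
  {
    /* First attempt: current behaviour, peeling for gaps allowed.  */
    loop_plan first = try_vectorize (loop, /*allow_peel_for_gaps=*/true);

    /* If that needed to peel for gaps, retry with peeling forbidden
       (which may push grouped loads towards gathers) and keep
       whichever loop has the better overall cost.  */
    if (first.ok && first.peeled_for_gaps)
      {
        loop_plan second = try_vectorize (loop,
                                          /*allow_peel_for_gaps=*/false);
        if (second.ok && second.cost < first.cost)
          return second;
      }
    return first;
  }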

Of course, trying more things means more compile time…
