On Thu, Aug 31, 2023 at 04:20:19PM +0800, Hongyu Wang via Gcc-patches wrote:
> For vector move insns like vmovdqa/vmovdqu, their evex counterparts
> require an explicit 64/32/16/8 suffix. The use of these instructions
> is prohibited under AVX10_1 or AVX512F, so for AVX2+APX_F we select
> vmovaps/vmovups for vector load/store insns that contain EGPR.

Why not make it dependent on AVX512VL?
I.e. if egpr_p && TARGET_AVX512VL, still use vmovdqu16 or vmovdqa16
and the like, and only if !evex_reg_p && egpr_p && !TARGET_AVX512VL
fall back to what you're doing?
> 
> gcc/ChangeLog:
> 
>       * config/i386/i386.cc (ix86_get_ssemov): Check if egpr is used,
>       adjust mnemonic for vmovdqu/vmovdqa.
>       * config/i386/sse.md
>       (*<extract_type>_vinsert<shuffletype><extract_suf>_0):
>       Check if egpr is used, adjust mnemonic for vmovdqu/vmovdqa.
>       (avx_vec_concat<mode>): Likewise, and separate alternative 0 to
>       avx_noavx512f.

        Jakub
