"H.J. Lu" <hjl.to...@gmail.com> writes:

> With SSE emulation of MMX intrinsics in 64-bit mode,
>
> ---
> __v8qi test ()
> {
>   __v8qi mm0 = {1,2,3,4,5,6,7,8};
>   __v8qi mm1 = {11,22,33,44,55,66,77,88};
>   volatile __m64 x;
>
>   x = _mm_add_pi8 (mm0, mm1);
>
>   return x;
> }
> ---
>
> is compiled into
>
>       movq    .LC0(%rip), %xmm0
>       movq    .LC1(%rip), %xmm1
>       paddb   %xmm1, %xmm0
>       movq    %xmm0, -8(%rsp)
>       movq    -8(%rsp), %xmm0
>       ret
>
> instead of
>
>       movq    .LC1(%rip), %mm0
>       paddb   .LC0(%rip), %mm0
>       movq    %mm0, -8(%rsp)
>       movq    -8(%rsp), %xmm0
>       ret

This is PR target/90503.
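
For anyone wanting to reproduce this locally, a self-contained version of
the quoted test case might look as follows (just a sketch: the original
mail doesn't show the compile command, so the flags in the comment are an
assumption, and the explicit casts are only there so the snippet builds
without -flax-vector-conversions; they don't change the generated code):

---
/* Compile e.g. with: gcc -O2 -S test.c on x86-64 (assumed flags,
   not taken from the original report).  */
#include <mmintrin.h>

__v8qi
test (void)
{
  __v8qi mm0 = {1,2,3,4,5,6,7,8};
  __v8qi mm1 = {11,22,33,44,55,66,77,88};
  volatile __m64 x;

  /* Element-wise addition of the eight packed 8-bit integers.  */
  x = _mm_add_pi8 ((__m64) mm0, (__m64) mm1);

  return (__v8qi) x;
}
---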

        Rainer

-- 
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University
