On Sat, Feb 02, 2019 at 09:12:12AM -0800, H.J. Lu wrote:
> On Sat, Feb 2, 2019 at 9:07 AM Florian Weimer <f...@deneb.enyo.de> wrote:
> >
> > * H. J. Lu:
> >
> > > 1. MMX maskmovq and SSE2 maskmovdqu aren't equivalent.  We emulate MMX
> > > maskmovq with SSE2 maskmovdqu by zeroing out the upper 64 bits of the
> > > mask operand.  A warning is issued since an invalid memory access may
> > > happen when bits 64:127 at the memory location are unmapped:
> > >
> > > xmmintrin.h:1168:3: note: Emulate MMX maskmovq with SSE2 maskmovdqu
> > > may result in invalid memory access
> > >  1168 |   __builtin_ia32_maskmovq ((__v8qi)__A, (__v8qi)__N, __P);
> > >       |   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >
> > Would it be possible to shift the mask according to the misalignment of
> > the address?  I think this should make it possible to avoid crossing a
> > page boundary if the original 64-bit load would not.
> 
> I guess it is possible.  But it may be quite a bit more complex for no
> apparent gain, since we also need to shift the implicit memory address.
> 
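
For reference, the emulation described in the quoted text above boils
down to something like this sketch (written with SSE2 intrinsics rather
than the actual builtin expansion; the helper name is made up):

#include <emmintrin.h>

/* Sketch of emulating the MMX maskmovq store with SSE2 maskmovdqu:
   zero-extend the data and the mask to 128 bits.  The upper 8 mask
   bytes are zero, so no bytes beyond the original 8 are written, but
   the 16-byte access may still touch p[8..15], which can fault if
   that memory is unmapped.  */
static inline void
emulated_maskmovq (__m64 data, __m64 mask, char *p)
{
  __m128i d = _mm_movpi64_epi64 (data);  /* zero-extend data to 128 bits */
  __m128i m = _mm_movpi64_epi64 (mask);  /* upper 64 mask bits are zero */
  _mm_maskmoveu_si128 (d, m, p);
}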

I have updated the MMX maskmovq emulation to handle it:

https://gcc.gnu.org/ml/gcc-patches/2019-02/msg00139.html

and added tests to verify that unmapped bits 64:127 at the memory
address are properly handled:

https://gcc.gnu.org/ml/gcc-patches/2019-02/msg00140.html
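
The idea behind the updated emulation is roughly the following (only a
sketch, assuming a 4096-byte page size; the actual patch above does this
in the compiler's expansion of the builtin, not in a wrapper like this):

#include <emmintrin.h>
#include <stdint.h>

/* Sketch: if the 16-byte maskmovdqu access starting at p would cross a
   page boundary that the original 8-byte maskmovq store would not,
   shift the data and the mask into the upper half of the XMM register
   and move the address down by 8 bytes, so the whole access stays
   within the page of the original store.  */
static inline void
emulated_maskmovq_safe (__m64 data, __m64 mask, char *p)
{
  __m128i d = _mm_movpi64_epi64 (data);
  __m128i m = _mm_movpi64_epi64 (mask);
  uintptr_t off = (uintptr_t) p & 4095;

  if (off > 4096 - 16 && off <= 4096 - 8)
    {
      /* p[0..7] is on this page, but p[8..15] may be unmapped.  */
      d = _mm_slli_si128 (d, 8);
      m = _mm_slli_si128 (m, 8);
      p -= 8;
    }
  _mm_maskmoveu_si128 (d, m, p);
}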


H.J.
