On Mon, Feb 4, 2019 at 12:08 PM Jakub Jelinek <ja...@redhat.com> wrote:
>
> On Mon, Feb 04, 2019 at 12:04:04PM +0100, Richard Biener wrote:
> > On Mon, Feb 4, 2019 at 10:10 AM Uros Bizjak <ubiz...@gmail.com> wrote:
> > >
> > > On Fri, Feb 1, 2019 at 10:18 PM H.J. Lu <hjl.to...@gmail.com> wrote:
> > > >
> > > > On x86-64, since __m64 is returned and passed in XMM registers, we can
> > > > implement MMX intrinsics with SSE instructions.  To support this, we
> > > > disable MMX by default in 64-bit mode so that MMX registers won't be
> > > > available with x86-64.  Most MMX instructions have equivalent SSE
> > > > versions, and the results of some SSE versions need to be reshuffled
> > > > into the right order for MMX.  There are a couple of tricky cases:
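For reference, a minimal sketch (not from the patch; the function names are
made up for illustration) of what "implement MMX intrinsics with SSE
instructions" can look like when __m64 values live in XMM registers,
including one case where the SSE result has to be reshuffled:

#include <immintrin.h>

static inline __m64
mmx_add_pi8_via_sse (__m64 a, __m64 b)
{
  /* movq2dq: widen the __m64 values into the low half of an XMM register.  */
  __m128i x = _mm_movpi64_epi64 (a);
  __m128i y = _mm_movpi64_epi64 (b);
  /* paddb on XMM computes the same eight byte sums; the upper 64 bits are
     don't-care.  movdq2q narrows back to __m64.  */
  return _mm_movepi64_pi64 (_mm_add_epi8 (x, y));
}

static inline __m64
mmx_unpackhi_pi8_via_sse (__m64 a, __m64 b)
{
  __m128i x = _mm_movpi64_epi64 (a);
  __m128i y = _mm_movpi64_epi64 (b);
  /* punpcklbw on the 128-bit values interleaves all eight byte pairs; the
     MMX punpckhbw result is the upper half of that, hence the extra byte
     shift to put it back in the right order.  */
  __m128i t = _mm_unpacklo_epi8 (x, y);
  return _mm_movepi64_pi64 (_mm_srli_si128 (t, 8));
}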
> > >
> > > I don't think we have to disable MMX registers, but we have to tune
> > > register allocation preferences to not allocate MMX registers unless
> > > really necessary.  In practice, this means changing "y" constraints to
> > > "*y" when TARGET_MMX_WITH_SSE is active (probably using the "enabled"
> > > attribute).  This would solve the problem with assembler clobbers that
> > > Andi exposed.
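A minimal illustration (my own, not from Andi's report) of the kind of
inline asm this concerns: user code can request MMX registers through
constraints or name them in clobber lists, so they cannot simply vanish from
the 64-bit register file; de-preferring them for register allocation (the
"*y" idea) keeps such asm working while ordinary code generation avoids them.

#include <mmintrin.h>

__m64
add_bytes_asm (__m64 a, __m64 b)
{
  __m64 r;
  /* "y" constraints explicitly request MMX registers.  */
  __asm__ ("paddb %2, %0" : "=y" (r) : "0" (a), "y" (b));
  _mm_empty ();               /* leave MMX state before returning */
  return r;
}

void
clobber_mm0 (void)
{
  /* Clobber lists can also name MMX registers directly.  */
  __asm__ volatile ("pxor %%mm0, %%mm0\n\temms" ::: "mm0");
}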
> >
> > But is "unless really necessary" good enough to guarantee one is never
> > allocated wrongly under any circumstance?  I actually like HJ's patch (not looked at the
>
> Or we could disable MMX registers unless they are referenced in inline asm
> (clobbers or constraints).
>
> Anyway, is the patch set meant for GCC9 or GCC10?  I'd say it would be quite
> dangerous to change this in GCC9.

No, this relatively invasive patchset is definitely meant for GCC10.

Uros.
