On Thu, Nov 5, 2020 at 2:39 PM Alexander Monakov <amona...@ispras.ru> wrote:
>
> On Thu, 5 Nov 2020, Alexander Monakov via Gcc wrote:
>
> > On Thu, 5 Nov 2020, Uros Bizjak via Gcc wrote:
> >
> > > > No, this is not how LEA operates. It needs a memory input operand. The
> > > > above will report an "operand type mismatch for 'lea'" error.
> > >
> > > The following will work:
> > >
> > >   asm volatile ("lea (%1), %0" : "=r"(addr) : "r"((uintptr_t)&x));
> >
> > This is the same as a plain move though, and the cast to uintptr_t doesn't
> > do anything; you can simply pass "r"(&x) to the same effect.
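> >
> > That is, assuming the same addr and x, these two are equivalent, since
> > either way the address is computed outside the asm:
> >
> >   asm volatile ("lea (%1), %0" : "=r"(addr) : "r"(&x));
> >   asm volatile ("mov %1, %0" : "=r"(addr) : "r"(&x));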
> >
> > The main advantage of passing a "fake" memory location for use with lea is
> > avoiding base+offset computation outside the asm. If you're okay with one
> > extra register tied up by the asm, just pass the address to the asm 
> > directly:
> >
> > void foo(__seg_fs int *x)
> > {
> >   asm("# %0 (%1)" :: "m"(x[1]), "r"(&x[1]));
> >   asm("# %0 (%1)" :: "m"(x[0]), "r"(&x[0]));
> > }
>
> Actually, in the original context the asm ties up %rsi no matter what (because
> the operand must be in %rsi to make the call), so the code would just
> pass "S"(&var) for the call alternative and "m"(var) for the native 
> instruction.
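
Schematically, reusing the comment-template style from above (nothing
here is from the actual patch):

  asm ("# %0 (%1)" :: "m"(var), "S"(&var));

where the native alternative uses %0 and the call alternative finds the
address already loaded in %rsi as %1.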

Or pass both operands, "m"(var) and "m"(*p), one for each alternative, with

uintptr_t *p = (uintptr_t *)(uintptr_t) &var;

similar to what is done in the original patch. The copy to %rsi can then
be part of the alternative assembly.
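
Concretely, a minimal sketch of that shape (the variable is taken through
a pointer parameter here; "mov" stands in for the real native instruction,
"func" is a hypothetical helper, and the alternative-selection mechanism
is elided):

#include <stdint.h>

unsigned long
foo (__seg_fs unsigned long *x)
{
  unsigned long ret;
  /* "Fake" generic-space operand; only its address is used, by lea.  */
  uintptr_t *p = (uintptr_t *)(uintptr_t) x;

  asm ("mov %1, %0"
       /* call alternative: "lea %2, %%rsi; call func" */
       : "=r" (ret)
       : "m" (*x), "m" (*p)
       : "rsi");
  return ret;
}

Here %1 is the native %fs-based memory reference, and %2 only hands lea a
generic address to load into %rsi, so the copy stays inside the asm.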

Uros.
