https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99581

--- Comment #6 from Vladimir Makarov <vmakarov at gcc dot gnu.org> ---
(In reply to Segher Boessenkool from comment #5)
> Thanks Vladimir.  It is indeed a problem in LRA (or triggered by it).
> We have
>     8: {[r121:DI+low(unspec[`*.LANCHOR0',%2:DI] 47+0x92a4)]=asm_operands;clobber
> 
> so this is an offset that is too big for a machine instruction, those can
> take -32768..32767.
> 
> Changing the constraint to "m" you get in LRA
>     Inserting insn reload before:
>    13: r121:DI=high(unspec[`*.LANCHOR0',%2:DI] 47+0x92a4)
> 
> but this doesn't happen if you keep it "o", and it dies later.

The problem is that before the patches we wrongly used '=' as the constraint
for the memory, and after the patches we (rightly) use 'o' as the constraint.
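
Just to illustrate the situation (this is not the testcase from the PR, only
a reduced sketch I made up), the problematic combination is an asm memory
output with the 'o' constraint whose address ends up with an anchor offset
bigger than the displacement range Segher mentions above:

/* Hypothetical reduced example, not the PR testcase: the "o"
   (offsettable memory) output can end up addressed through a section
   anchor with an offset like 0x92a4, which does not fit in the
   -32768..32767 displacement range quoted above.  */
static struct { char pad[0x9000]; long x; } big;

void
f (void)
{
  __asm__ ("" : "=o" (big.x));
}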

The culprit is a function added by Richard Sandiford for arm64 sve:

commit 1aeffdce2dfe718e1337d75eb4f22c3c300df9bb
Author: Richard Sandiford <richard.sandif...@arm.com>
Date:   Mon Nov 18 15:26:07 2019 +0000

    LRA: handle memory constraints that accept more than "m"

    LRA allows address constraints that are more relaxed than "p":

      /* Target hooks sometimes don't treat extra-constraint addresses as
         legitimate address_operands, so handle them specially.  */
      if (insn_extra_address_constraint (cn)
          && satisfies_address_constraint_p (&ad, cn))
        return change_p;

    For SVE it's useful to allow the same thing for memory constraints.
    The particular use case is LD1RQ, which is an SVE instruction that
    addresses Advanced SIMD vector modes and that accepts some addresses
    that normal Advanced SIMD moves don't.

    Normally we require every memory to satisfy at least "m", which is
    defined to be a memory "with any kind of address that the machine
    supports in general".  However, LD1RQ is very much special-purpose:
    it doesn't really have any relation to normal operations on these
    modes.  Adding its addressing modes to "m" would lead to bad Advanced
    SIMD optimisation decisions in passes like ivopts.  LD1RQ therefore
    has a memory constraint that accepts things "m" doesn't.

...

static bool
valid_address_p (rtx op, struct address_info *ad,
                 enum constraint_num constraint)
{
  address_eliminator eliminator (ad);

  /* Allow a memory OP if it matches CONSTRAINT, even if CONSTRAINT is more     
     forgiving than "m".                                                        
     Need to extract memory from op for special memory constraint,              
     i.e. bcst_mem_operand in i386 backend.  */
  if (MEM_P (extract_mem_from_operand (op))
      && (insn_extra_memory_constraint (constraint)
          || insn_extra_special_memory_constraint (constraint))
=>    && constraint_satisfied_p (op, constraint))
    return true;

  return valid_address_p (ad->mode, *ad->outer, ad->as);
}

He actually added the if-stmt.  The condition of this if-stmt is true for our
case because constraint_satisfied_p returns true for the memory and
CONSTRAINT_o.  If the condition were false, we would use the machine-dependent
legitimate_address_p, it would return false, and we would reload the memory
address.

constraint_satisfied_p returns true because the **infrastructure** function
offsettable_nonstrict_memref_p returns true for this memory.
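
For context, offsettable_nonstrict_memref_p is roughly the following
(paraphrased from recog.cc, so the exact current source may differ); the
point is that it does the offsettability check with strictp == 0:

/* Paraphrased from GCC's recog.cc, not verbatim: the nonstrict variant
   simply delegates to the generic offsettability check, passing
   strictp == 0.  */
int
offsettable_nonstrict_memref_p (rtx op)
{
  return (MEM_P (op)
          && offsettable_address_addr_space_p (0, GET_MODE (op), XEXP (op, 0),
                                               MEM_ADDR_SPACE (op)));
}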

So you are right that it is not a ppc64 target code problem.  But I am stuck
right now on how to fix the PR w/o breaking arm sve.  The only fix I see at
the moment is adding a machine-dependent hook, but I don't like it as we
already have too many hooks for RA.
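
Purely to illustrate that last option (this is not a proposal, and the hook
name below does not exist in GCC), such a hook could guard the fast path in
the if-stmt above, something like:

/* Illustrative sketch only; lra_constraint_address_ok_p is a made-up
   hook name.  The idea would be to let the target veto the
   constraint-based fast path so that LRA falls back to the
   machine-dependent legitimate_address_p check.  */
  if (MEM_P (extract_mem_from_operand (op))
      && (insn_extra_memory_constraint (constraint)
          || insn_extra_special_memory_constraint (constraint))
      && constraint_satisfied_p (op, constraint)
      && targetm.lra_constraint_address_ok_p (op, constraint))
    return true;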
