On Wed, Jul 20, 2022 at 4:37 AM Hongtao Liu <crazy...@gmail.com> wrote:
>
> On Tue, Jul 19, 2022 at 5:37 PM Uros Bizjak <ubiz...@gmail.com> wrote:
> >
> > On Tue, Jul 19, 2022 at 8:56 AM Hongtao Liu <crazy...@gmail.com> wrote:
> > >
> > > On Tue, Jul 19, 2022 at 2:35 PM Uros Bizjak via Gcc-patches
> > > <gcc-patches@gcc.gnu.org> wrote:
> > > >
> > > > On Tue, Jul 19, 2022 at 8:07 AM liuhongt <hongtao....@intel.com> wrote:
> > > > >
> > > > > And split it after reload.
> > > > >
> > > > > > You will need an ix86_binary_operator_ok insn constraint here, with a
> > > > > > corresponding expander using ix86_fixup_binary_operands_no_copy to
> > > > > > prepare the insn operands.
> > > > > I split it into a define_expand with just register_operand, and allow
> > > > > memory/immediate in the define_insn, assuming combine/forwprop will do
> > > > > the optimization.
> > > >
> > > > But you will *ease* the job of the above passes if you use
> > > > ix86_fixup_binary_operands_no_copy in the expander.
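
(For reference, what I mean here is an expander of roughly this shape -- an
untested sketch only, with the operand predicates just as placeholders:

(define_expand "<code><mode>3"
  [(parallel
    [(set (match_operand:VI_16_32 0 "nonimmediate_operand")
          (any_logic:VI_16_32
            (match_operand:VI_16_32 1 "nonimmediate_operand")
            (match_operand:VI_16_32 2 "nonimmediate_operand")))
     (clobber (reg:CC FLAGS_REG))])]
  ""
  "ix86_fixup_binary_operands_no_copy (<CODE>, <MODE>mode, operands);")

so the operands are legitimized once at expand time and the later passes see
an insn that already matches.)
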
> > > for -m32, it will hit an ICE in
> > > Breakpoint 1, ix86_fixup_binary_operands_no_copy (code=XOR,
> > > mode=E_V4QImode, operands=0x7fffffffa970) at
> > > /gcc/config/i386/i386-expand.cc:1184
> > > 1184      rtx dst = ix86_fixup_binary_operands (code, mode, operands);
> > > (gdb) n
> > > 1185      gcc_assert (dst == operands[0]); -- here
> > > (gdb)
> > >
> > > the original operands[0], operands[1], operands[2] are below
> > > (gdb) p debug_rtx (operands[0])
> > > (mem/c:V4QI (plus:SI (reg/f:SI 77 virtual-stack-vars)
> > >         (const_int -8220 [0xffffffffffffdfe4])) [0 MEM <vector(4)
> > > unsigned char> [(unsigned char *)&tmp2 + 4B]+0 S4 A32])
> > > $1 = void
> > > (gdb) p debug_rtx (operands[1])
> > > (subreg:V4QI (reg:SI 129) 0)
> > > $2 = void
> > > (gdb) p debug_rtx (operands[2])
> > > (subreg:V4QI (reg:SI 98 [ _46 ]) 0)
> > > $3 = void
> > > (gdb)
> > >
> > > Since operands[0] is a mem and not equal to operands[1],
> > > ix86_fixup_binary_operands will create a pseudo register for dst and
> > > then hit the ICE.
> > > Is this a bug, or is it the expected behavior?
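
(To restate the behavior described above as code: the fixup routine
effectively does

  /* If the destination is a memory that does not match the first source
     operand, do the operation in a pseudo register instead.  */
  if (MEM_P (dst) && !rtx_equal_p (dst, src1))
    dst = gen_reg_rtx (mode);

and the no-copy wrapper then asserts dst == operands[0], which cannot hold
for a memory operands[0] that differs from operands[1].)
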
> >
> > You will need ix86_expand_binary_operator here.
> It will swap the memory operand from op1 to op2 and then hit an ICE for an unrecognized insn.
>
> What about this?

Still no good... You are using commutative operands, so the predicate
of operand 2 should also allow memory. So, the predicate should be
nonimmediate_or_x86_64_const_vector_operand. The intermediate insn
pattern should look something like *<any_or:code><mode>_1, but with
added XMM and MMX reg alternatives instead of mask regs.
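
I mean something along these lines -- an untested sketch only, mirroring the
alternative layout of *<any_or:code><mode>_1 with XMM/MMX register
alternatives appended; the exact constraints, the attributes and the handling
of the constant-vector forms are placeholders:

(define_insn "*<code><mode>3"
  [(set (match_operand:VI_16_32 0 "nonimmediate_operand" "=rm,r,x,x,v")
        (any_logic:VI_16_32
          (match_operand:VI_16_32 1 "nonimmediate_operand" "%0,0,0,x,v")
          (match_operand:VI_16_32 2
            "nonimmediate_or_x86_64_const_vector_operand" "r,m,x,x,v")))
   (clobber (reg:CC FLAGS_REG))]
  "ix86_binary_operator_ok (<CODE>, <MODE>mode, operands)"
  "#")

A post-reload splitter (not shown here) would then emit either the GPR
sequence, which needs the flags clobber, or the plain vector logic.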

Uros.

>
> -(define_insn "<code><mode>3"
> -  [(set (match_operand:VI_16_32 0 "register_operand" "=?r,x,x,v")
> +(define_expand "<code><mode>3"
> +  [(set (match_operand:VI_16_32 0 "nonimmediate_operand")
>          (any_logic:VI_16_32
> -         (match_operand:VI_16_32 1 "register_operand" "%0,0,x,v")
> -         (match_operand:VI_16_32 2 "register_operand" "r,x,x,v")))
> -   (clobber (reg:CC FLAGS_REG))]
> +         (match_operand:VI_16_32 1 "nonimmediate_operand")
> +         (match_operand:VI_16_32 2
> +           "register_or_x86_64_const_vector_operand")))]
>    ""
> +{
> +  rtx dst = ix86_fixup_binary_operands (<CODE>, <MODE>mode, operands);
> +  if (MEM_P (operands[2]))
> +    operands[2] = force_reg (<MODE>mode, operands[2]);
> +  rtx op = gen_rtx_SET (dst, gen_rtx_fmt_ee (<CODE>, <MODE>mode,
> +                                            operands[1], operands[2]));
> +  rtx clob = gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (CCmode, FLAGS_REG));
> +  emit_insn (gen_rtx_PARALLEL (VOIDmode, gen_rtvec (2, op, clob)));
> +  if (dst != operands[0])
> +    emit_move_insn (operands[0], dst);
> +  DONE;
> +})
> +
>
> >
> > Uros.
>
>
>
> --
> BR,
> Hongtao
