On Mon, 31 Jul 2023, ??? wrote:
> Yeah. I have tried this case too.
>
> But this case doesn't need to be vectorized as COND_FMA, am I right?
Only when you enable loop masking. Alternatively use
double foo (double *a, double *b, double *c)
{
  double result = 0.0;
  for (int i = 0; i < 1024; ++i)
    result += i & 1 ? __builtin_fma (a[i], b[i], c[i]) : 0.0;
  return result;
}
but then for me if-conversion produces
iftmp.0_18 = __builtin_fma (_8, _10, _5);
_ifc__43 = _26 ? iftmp.0_18 : 0.0;
with -ffast-math (probably rightfully so). I then get .FMAs
vectorized and .COND_FMA folded.
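
For reference, the elementwise semantics being relied on here are roughly the
following (a hedged scalar sketch of .COND_FMA, written from the IR above
rather than quoted from the internals manual):

/* Rough scalar model of one element of .COND_FMA (mask, a, b, c, else):
   an fma where the mask is set, the fallback ("else") value where it is not.
   This mirrors how the COND_EXPR above (_26 ? iftmp.0_18 : 0.0) folds,
   with 0.0 as the else value of the reduction.  */
double
cond_fma_element (int mask, double a, double b, double c, double else_val)
{
  return mask ? __builtin_fma (a, b, c) : else_val;
}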
> The thing I wonder is whether this condition:
>
> if (mask_opno >= 0 && reduc_idx >= 0)
>
> or the similar one for len:
> if (len_opno >= 0 && reduc_idx >= 0)
>
> is redundant in vectorizable_call?
>
>
> [email protected]
>
> From: Richard Biener
> Date: 2023-07-31 21:33
> To: [email protected]
> CC: richard.sandiford; gcc-patches
> Subject: Re: Re: [PATCH V2] VECT: Support CALL vectorization for COND_LEN_*
> On Mon, 31 Jul 2023, [email protected] wrote:
>
> > Hi, Richi.
> >
> > >> I think you need to use fma from math.h together with -ffast-math
> > >> to get fma.
> >
> > As you said, this is one of the cases I tried:
> > https://godbolt.org/z/xMzrrv5dT
> > GCC failed to vectorize.
> >
> > Could you help me with this?
>
> double foo (double *a, double *b, double *c)
> {
>   double result = 0.0;
>   for (int i = 0; i < 1024; ++i)
>     result += __builtin_fma (a[i], b[i], c[i]);
>   return result;
> }
>
> with -mavx2 -mfma -Ofast this is vectorized on x86_64 to
>
> ...
> vect__9.13_27 = MEM <vector(4) double> [(double *)vectp_a.11_29];
> _9 = *_8;
> vect__10.14_26 = .FMA (vect__7.10_30, vect__9.13_27, vect__4.7_33);
> vect_result_17.15_25 = vect__10.14_26 + vect_result_20.4_36;
> ...
>
> but ifcvt still shows
>
> _9 = *_8;
> _10 = __builtin_fma (_7, _9, _4);
> result_17 = _10 + result_20;
>
> vectorizable_call still gets IFN_FMA via
>
> /* First try using an internal function.  */
> code_helper convert_code = MAX_TREE_CODES;
> if (cfn != CFN_LAST
>     && (modifier == NONE
>         || (modifier == NARROW
>             && simple_integer_narrowing (vectype_out, vectype_in,
>                                          &convert_code))))
>   ifn = vectorizable_internal_function (cfn, callee, vectype_out,
>                                         vectype_in);
>
> from CFN_BUILT_IN_FMA
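
Roughly, that lookup boils down to the following (a simplified sketch of what
vectorizable_internal_function does, not the exact tree-vect-stmts.cc source;
the type0/type1 selection from the direct_internal_fn_info is omitted):

/* Sketch: map the combined_fn to a direct internal function and ask
   whether the target supports it for the given vector types.  */
static internal_fn
sketch_internal_fn_for_call (combined_fn cfn, tree fndecl,
                             tree vectype_out, tree vectype_in)
{
  internal_fn ifn = (internal_fn_p (cfn)
                     ? as_internal_fn (cfn)
                     : associated_internal_fn (fndecl));
  if (ifn != IFN_LAST
      && direct_internal_fn_p (ifn)
      && direct_internal_fn_supported_p (ifn,
                                         tree_pair (vectype_out, vectype_in),
                                         OPTIMIZE_FOR_SPEED))
    return ifn;
  return IFN_LAST;
}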
>
>
>
> > Thanks.
> >
> >
> > [email protected]
> >
> > From: Richard Biener
> > Date: 2023-07-31 20:00
> > To: [email protected]
> > CC: richard.sandiford; gcc-patches
> > Subject: Re: Re: [PATCH V2] VECT: Support CALL vectorization for COND_LEN_*
> > On Mon, 31 Jul 2023, [email protected] wrote:
> >
> > > OK. Thanks, Richard.
> > >
> > > Could you give me a case where SVE can vectorize a reduction with FMA?
> > > Meaning one that goes into vectorizable_call and vectorizes FMA into COND_FMA?
> > >
> > > I have tried many times to reproduce such a case, but failed.
> >
> > I think you need to use fma from math.h together with -ffast-math
> > to get fma.
> >
> > Richard.
> >
> > > Thanks.
> > >
> > >
> > > [email protected]
> > >
> > > From: Richard Sandiford
> > > Date: 2023-07-31 18:19
> > > To: Juzhe-Zhong
> > > CC: gcc-patches; rguenther
> > > Subject: Re: [PATCH V2] VECT: Support CALL vectorization for COND_LEN_*
> > > Juzhe-Zhong <[email protected]> writes:
> > > > Hi, Richard and Richi.
> > > >
> > > > Based on the suggestions from Richard:
> > > > https://gcc.gnu.org/pipermail/gcc-patches/2023-July/625396.html
> > > >
> > > > This patch chooses approach (1) that Richard provided, meaning:
> > > >
> > > > RVV implements cond_* optabs as expanders. RVV therefore supports
> > > > both IFN_COND_ADD and IFN_COND_LEN_ADD. No dummy length arguments
> > > > are needed at the gimple level.
> > > >
> > > > This approach makes the code much cleaner and more reasonable.
> > > >
> > > > Consider the following case:
> > > >
> > > > void foo (float * __restrict a, float * __restrict b,
> > > >           int * __restrict cond, int n)
> > > > {
> > > >   for (int i = 0; i < n; i++)
> > > >     if (cond[i])
> > > >       a[i] = b[i] + a[i];
> > > > }
> > > >
> > > >
> > > > Output of RISC-V (32-bit) gcc (trunk):
> > > > <source>:5:21: missed: couldn't vectorize loop
> > > > <source>:5:21: missed: not vectorized: control flow in loop.
> > > >
> > > > ARM SVE:
> > > >
> > > > ...
> > > > mask__27.10_51 = vect__4.9_49 != { 0, ... };
> > > > ...
> > > > vec_mask_and_55 = loop_mask_49 & mask__27.10_51;
> > > > ...
> > > > vect__9.17_62 = .COND_ADD (vec_mask_and_55, vect__6.13_56,
> > > >                            vect__8.16_60, vect__6.13_56);
> > > >
> > > > For RVV, we want IR as follows:
> > > >
> > > > ...
> > > > _68 = .SELECT_VL (ivtmp_66, POLY_INT_CST [4, 4]);
> > > > ...
> > > > mask__27.10_51 = vect__4.9_49 != { 0, ... };
> > > > ...
> > > > vect__9.17_60 = .COND_LEN_ADD (mask__27.10_51, vect__6.13_55,
> > > >                                vect__8.16_59, vect__6.13_55, _68, 0);
> > > > ...
> > > >
> > > > Both the len and the mask of COND_LEN_ADD are real, not dummy.
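
For clarity, my reading of the elementwise semantics of the .COND_LEN_ADD
call above (a hedged sketch based on the IR shown, not a quote from the
internals documentation):

/* Sketch of one element of .COND_LEN_ADD (mask, a, b, else, len, bias):
   active elements (mask set and index below len + bias) get a + b, all
   other elements get the "else" value -- here the old vector of a, so
   inactive lanes keep their original contents.  */
float
cond_len_add_element (int i, int mask_i, float a_i, float b_i,
                      float else_i, int len, int bias)
{
  return (mask_i && i < len + bias) ? a_i + b_i : else_i;
}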
> > > >
> > > > This patch has been fully tested on the RISC-V port, which supports
> > > > both COND_* and COND_LEN_*.
> > > >
> > > > Bootstrap and regression on x86 also passed.
> > > >
> > > > OK for trunk?
> > > >
> > > > gcc/ChangeLog:
> > > >
> > > >         * internal-fn.cc (FOR_EACH_LEN_FN_PAIR): New macro.
> > > >         (get_len_internal_fn): New function.
> > > >         (CASE): Ditto.
> > > >         * internal-fn.h (get_len_internal_fn): Ditto.
> > > >         * tree-vect-stmts.cc (vectorizable_call): Support CALL
> > > >         vectorization with COND_LEN_*.
> > > >
> > > > ---
> > > >  gcc/internal-fn.cc     | 46 ++++++++++++++++++++++
> > > >  gcc/internal-fn.h      |  1 +
> > > >  gcc/tree-vect-stmts.cc | 87 +++++++++++++++++++++++++++++++++++++-----
> > > >  3 files changed, 125 insertions(+), 9 deletions(-)
> > > >
> > > > diff --git a/gcc/internal-fn.cc b/gcc/internal-fn.cc
> > > > index 8e294286388..379220bebc7 100644
> > > > --- a/gcc/internal-fn.cc
> > > > +++ b/gcc/internal-fn.cc
> > > > @@ -4443,6 +4443,52 @@ get_conditional_internal_fn (internal_fn fn)
> > > > }
> > > > }
> > > >
> > > > +/* Invoke T(IFN) for each internal function IFN that also has an
> > > > +   IFN_COND_LEN_* or IFN_MASK_LEN_* form.  */
> > > > +#define FOR_EACH_LEN_FN_PAIR(T)  \
> > > > +  T (MASK_LOAD, MASK_LEN_LOAD)  \
> > > > +  T (MASK_STORE, MASK_LEN_STORE)  \
> > > > +  T (MASK_GATHER_LOAD, MASK_LEN_GATHER_LOAD)  \
> > > > +  T (MASK_SCATTER_STORE, MASK_LEN_SCATTER_STORE)  \
> > > > +  T (COND_ADD, COND_LEN_ADD)  \
> > > > +  T (COND_SUB, COND_LEN_SUB)  \
> > > > +  T (COND_MUL, COND_LEN_MUL)  \
> > > > +  T (COND_DIV, COND_LEN_DIV)  \
> > > > +  T (COND_MOD, COND_LEN_MOD)  \
> > > > +  T (COND_RDIV, COND_LEN_RDIV)  \
> > > > +  T (COND_FMIN, COND_LEN_FMIN)  \
> > > > +  T (COND_FMAX, COND_LEN_FMAX)  \
> > > > +  T (COND_MIN, COND_LEN_MIN)  \
> > > > +  T (COND_MAX, COND_LEN_MAX)  \
> > > > +  T (COND_AND, COND_LEN_AND)  \
> > > > +  T (COND_IOR, COND_LEN_IOR)  \
> > > > +  T (COND_XOR, COND_LEN_XOR)  \
> > > > +  T (COND_SHL, COND_LEN_SHL)  \
> > > > +  T (COND_SHR, COND_LEN_SHR)  \
> > > > +  T (COND_NEG, COND_LEN_NEG)  \
> > > > +  T (COND_FMA, COND_LEN_FMA)  \
> > > > +  T (COND_FMS, COND_LEN_FMS)  \
> > > > +  T (COND_FNMA, COND_LEN_FNMA)  \
> > > > +  T (COND_FNMS, COND_LEN_FNMS)
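
Based on the ChangeLog above, this macro is presumably consumed along the
following lines (a rough sketch of get_len_internal_fn inferred from the
ChangeLog entries, not the patch text itself):

/* Sketch: map a masked/conditional internal function to its
   length-controlled counterpart, or IFN_LAST if there is none.  */
internal_fn
get_len_internal_fn (internal_fn fn)
{
  switch (fn)
    {
#define CASE(NAME, LEN_NAME) case IFN_##NAME: return IFN_##LEN_NAME;
      FOR_EACH_LEN_FN_PAIR (CASE)
#undef CASE
    default:
      return IFN_LAST;
    }
}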
> > >
> > > With the earlier patch to add DEF_INTERNAL_COND_FN and
> > > DEF_INTERNAL_SIGNED_COND_FN, I think we should use those to handle
> > > the COND_* cases, rather than putting them in this macro.
> > >
> > > Thanks,
> > > Richard
> > >
> > >
> >
> >
>
>
--
Richard Biener <[email protected]>
SUSE Software Solutions Germany GmbH,
Frankenstrasse 146, 90461 Nuernberg, Germany;
GF: Ivo Totev, Andrew McDonald, Werner Knoblich; (HRB 36809, AG Nuernberg)