Sorry for the slow response.

Hao Liu OS <h...@os.amperecomputing.com> writes:
>> Ah, thanks.  In that case, Hao, I think we can avoid the ICE by changing:
>>
>>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>>       && vect_is_reduction (stmt_info))
>>
>> to:
>>
>>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>>       && STMT_VINFO_LIVE_P (stmt_info)
>>       && vect_is_reduction (stmt_info))
>
> I tried this and it indeed avoids the ICE.  But it seems the
> reduction_latency calculation is also skipped: after this modification,
> the reduction_latency is 0 for this case, whereas previously it was 1
> and 2 for scalar and vector respectively.

Which test case do you see this for?  The two tests in the patch still
seem to report correct latencies for me if I make the change above.
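
For reference, I'm checking along these lines, reusing the dg-options
from the tests (a sketch of my workflow; the dump-file suffix varies
between builds):

  $ gcc -S -Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details \
      -fno-tree-slp-vectorize pr110625_2.c
  $ grep "reduction latency" pr110625_2.c.*.vect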

Thanks,
Richard

> IMHO, to keep the result consistent with before, should we move the
> STMT_VINFO_LIVE_P check down, inside the if?  Such as:
>
>   /* Calculate the minimum cycles per iteration imposed by a reduction
>      operation.  */
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && vect_is_reduction (stmt_info))
>     {
>       unsigned int base
>         = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
>       if (STMT_VINFO_LIVE_P (stmt_info) && STMT_VINFO_FORCE_SINGLE_CYCLE (
>             info_for_reduction (m_vinfo, stmt_info)))
>         /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
>            and then accumulate that, but at the moment the loop-carried
>            dependency includes all copies.  */
>         ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>       else
>         ops->reduction_latency = MAX (ops->reduction_latency, base);
>     }
>
> Thanks,
> Hao
>
> ________________________________________
> From: Richard Sandiford <richard.sandif...@arm.com>
> Sent: Wednesday, July 26, 2023 17:14
> To: Richard Biener
> Cc: Hao Liu OS; GCC-patches@gcc.gnu.org
> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
>
> Richard Biener <richard.guent...@gmail.com> writes:
>> On Wed, Jul 26, 2023 at 4:02 AM Hao Liu OS via Gcc-patches
>> <gcc-patches@gcc.gnu.org> wrote:
>>>
>>> > When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that we're 
>>> > not papering over an issue elsewhere.
>>>
>>> Yes, I also wonder if this is an issue in vectorizable_reduction.  Below is 
>>> the gimple of "gcc.target/aarch64/sve/cost_model_13.c":
>>>
>>>   <bb 3>:
>>>   # res_18 = PHI <res_15(7), 0(6)>
>>>   # i_20 = PHI <i_16(7), 0(6)>
>>>   _1 = (long unsigned int) i_20;
>>>   _2 = _1 * 2;
>>>   _3 = x_14(D) + _2;
>>>   _4 = *_3;
>>>   _5 = (unsigned short) _4;
>>>   res.0_6 = (unsigned short) res_18;
>>>   _7 = _5 + res.0_6;                             <-- The current stmt_info
>>>   res_15 = (short int) _7;
>>>   i_16 = i_20 + 1;
>>>   if (n_11(D) > i_16)
>>>     goto <bb 7>;
>>>   else
>>>     goto <bb 4>;
>>>
>>>   <bb 7>:
>>>   goto <bb 3>;
>>>
>>> It looks like STMT_VINFO_REDUC_DEF should be "res_18 = PHI <res_15(7),
>>> 0(6)>"?
>>> The status here is:
>>>   STMT_VINFO_REDUC_IDX (stmt_info): 1
>>>   STMT_VINFO_REDUC_TYPE (stmt_info): TREE_CODE_REDUCTION
>>>   STMT_VINFO_REDUC_VECTYPE (stmt_info): 0x0
>>
>> Not all stmts in the SSA cycle forming the reduction have
>> STMT_VINFO_REDUC_DEF set; at the moment only the last (latch def) and
>> live stmts have it.
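>>
>> E.g. for the "_7 = _5 + res.0_6" stmt above (mid-cycle, not live),
>> you'd see something like:
>>
>>   STMT_VINFO_REDUC_IDX (stmt_info)   /* set (1 here) */
>>   STMT_VINFO_REDUC_DEF (stmt_info)   /* NULL: neither latch def nor live */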
>
> Ah, thanks.  In that case, Hao, I think we can avoid the ICE by changing:
>
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && vect_is_reduction (stmt_info))
>
> to:
>
>   if ((kind == scalar_stmt || kind == vector_stmt || kind == vec_to_scalar)
>       && STMT_VINFO_LIVE_P (stmt_info)
>       && vect_is_reduction (stmt_info))
>
> instead of using a null check.
>
> I see that vectorizable_reduction calculates a reduc_chain_length.
> Would it be OK to store that in the stmt_vec_info?  I suppose the
> AArch64 code should be multiplying by that as well.  (It would be a
> separate patch from this one though.)
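>
> As a sketch only, with a hypothetical STMT_VINFO_REDUC_CHAIN_LENGTH
> accessor for the stored value (no such accessor exists yet), that
> would be something like:
>
>   unsigned int len = STMT_VINFO_REDUC_CHAIN_LENGTH (stmt_info);
>   ops->reduction_latency = MAX (ops->reduction_latency, base * len);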
>
> Richard
>
>
>>
>> Richard.
>>
>>> Thanks,
>>> Hao
>>>
>>> ________________________________________
>>> From: Richard Sandiford <richard.sandif...@arm.com>
>>> Sent: Tuesday, July 25, 2023 17:44
>>> To: Hao Liu OS
>>> Cc: GCC-patches@gcc.gnu.org
>>> Subject: Re: [PATCH] AArch64: Do not increase the vect reduction latency by multiplying count [PR110625]
>>>
>>> Hao Liu OS <h...@os.amperecomputing.com> writes:
>>> > Hi,
>>> >
>>> > Thanks for the suggestion.  I tested it and found a gcc_assert failure:
>>> >     gcc.target/aarch64/sve/cost_model_13.c (internal compiler error:
>>> >     in info_for_reduction, at tree-vect-loop.cc:5473)
>>> >
>>> > It is caused by empty STMT_VINFO_REDUC_DEF.
>>>
>>> When was STMT_VINFO_REDUC_DEF empty?  I just want to make sure that
>>> we're not papering over an issue elsewhere.
>>>
>>> Thanks,
>>> Richard
>>>
>>> > So, I added an extra check before checking single_defuse_cycle.  The
>>> > updated patch is below.  Is it OK for trunk?
>>> >
>>> > ---
>>> >
>>> > The new costs should only count reduction latency by multiplying count for
>>> > single_defuse_cycle.  For other situations, this will increase the
>>> > reduction latency a lot and miss vectorization opportunities.
>>> >
>>> > Tested on aarch64-linux-gnu.
>>> >
>>> > gcc/ChangeLog:
>>> >
>>> >       PR target/110625
>>> >       * config/aarch64/aarch64.cc (count_ops): Only '* count' for
>>> >       single_defuse_cycle while counting reduction_latency.
>>> >
>>> > gcc/testsuite/ChangeLog:
>>> >
>>> >       * gcc.target/aarch64/pr110625_1.c: New testcase.
>>> >       * gcc.target/aarch64/pr110625_2.c: New testcase.
>>> > ---
>>> >  gcc/config/aarch64/aarch64.cc                 | 13 ++++--
>>> >  gcc/testsuite/gcc.target/aarch64/pr110625_1.c | 46 +++++++++++++++++++
>>> >  gcc/testsuite/gcc.target/aarch64/pr110625_2.c | 14 ++++++
>>> >  3 files changed, 69 insertions(+), 4 deletions(-)
>>> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> >
>>> > diff --git a/gcc/config/aarch64/aarch64.cc b/gcc/config/aarch64/aarch64.cc
>>> > index 560e5431636..478a4e00110 100644
>>> > --- a/gcc/config/aarch64/aarch64.cc
>>> > +++ b/gcc/config/aarch64/aarch64.cc
>>> > @@ -16788,10 +16788,15 @@ aarch64_vector_costs::count_ops (unsigned int count, vect_cost_for_stmt kind,
>>> >      {
>>> >        unsigned int base
>>> >       = aarch64_in_loop_reduction_latency (m_vinfo, stmt_info, m_vec_flags);
>>> > -
>>> > -      /* ??? Ideally we'd do COUNT reductions in parallel, but unfortunately
>>> > -      that's not yet the case.  */
>>> > -      ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>>> > +      if (STMT_VINFO_REDUC_DEF (stmt_info)
>>> > +       && STMT_VINFO_FORCE_SINGLE_CYCLE (
>>> > +         info_for_reduction (m_vinfo, stmt_info)))
>>> > +     /* ??? Ideally we'd use a tree to reduce the copies down to 1 vector,
>>> > +        and then accumulate that, but at the moment the loop-carried
>>> > +        dependency includes all copies.  */
>>> > +     ops->reduction_latency = MAX (ops->reduction_latency, base * count);
>>> > +      else
>>> > +     ops->reduction_latency = MAX (ops->reduction_latency, base);
>>> >      }
>>> >
>>> >    /* Assume that multiply-adds will become a single operation.  */
>>> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_1.c b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > new file mode 100644
>>> > index 00000000000..0965cac33a0
>>> > --- /dev/null
>>> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_1.c
>>> > @@ -0,0 +1,46 @@
>>> > +/* { dg-do compile } */
>>> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
>>> > +/* { dg-final { scan-tree-dump-not "reduction latency = 8" "vect" } } */
>>> > +
>>> > +/* Do not increase the vector body cost due to the incorrect reduction latency
>>> > +    Original vector body cost = 51
>>> > +    Scalar issue estimate:
>>> > +      ...
>>> > +      reduction latency = 2
>>> > +      estimated min cycles per iteration = 2.000000
>>> > +      estimated cycles per vector iteration (for VF 2) = 4.000000
>>> > +    Vector issue estimate:
>>> > +      ...
>>> > +      reduction latency = 8      <-- Too large
>>> > +      estimated min cycles per iteration = 8.000000
>>> > +    Increasing body cost to 102 because scalar code would issue more quickly
>>> > +      ...
>>> > +    missed:  cost model: the vector iteration cost = 102 divided by the scalar iteration cost = 44 is greater or equal to the vectorization factor = 2.
>>> > +    missed:  not vectorized: vectorization not profitable.  */
>>> > +
>>> > +typedef struct
>>> > +{
>>> > +  unsigned short m1, m2, m3, m4;
>>> > +} the_struct_t;
>>> > +typedef struct
>>> > +{
>>> > +  double m1, m2, m3, m4, m5;
>>> > +} the_struct2_t;
>>> > +
>>> > +double
>>> > +bar (the_struct2_t *);
>>> > +
>>> > +double
>>> > +foo (double *k, unsigned int n, the_struct_t *the_struct)
>>> > +{
>>> > +  unsigned int u;
>>> > +  the_struct2_t result;
>>> > +  for (u = 0; u < n; u++, k--)
>>> > +    {
>>> > +      result.m1 += (*k) * the_struct[u].m1;
>>> > +      result.m2 += (*k) * the_struct[u].m2;
>>> > +      result.m3 += (*k) * the_struct[u].m3;
>>> > +      result.m4 += (*k) * the_struct[u].m4;
>>> > +    }
>>> > +  return bar (&result);
>>> > +}
>>> > diff --git a/gcc/testsuite/gcc.target/aarch64/pr110625_2.c b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> > new file mode 100644
>>> > index 00000000000..7a84aa8355e
>>> > --- /dev/null
>>> > +++ b/gcc/testsuite/gcc.target/aarch64/pr110625_2.c
>>> > @@ -0,0 +1,14 @@
>>> > +/* { dg-do compile } */
>>> > +/* { dg-options "-Ofast -mcpu=neoverse-n2 -fdump-tree-vect-details -fno-tree-slp-vectorize" } */
>>> > +/* { dg-final { scan-tree-dump "reduction latency = 8" "vect" } } */
>>> > +
>>> > +/* The reduction latency should be multiplied by the count for
>>> > +   single_defuse_cycle.  */
>>> > +
>>> > +long
>>> > +f (long res, short *ptr1, short *ptr2, int n)
>>> > +{
>>> > +  for (int i = 0; i < n; ++i)
>>> > +    res += (long) ptr1[i] << ptr2[i];
>>> > +  return res;
>>> > +}
