> -----Original Message-----
> From: Richard Biener <rguent...@suse.de>
> Sent: Monday, November 6, 2023 2:25 PM
> To: Tamar Christina <tamar.christ...@arm.com>
> Cc: gcc-patches@gcc.gnu.org; nd <n...@arm.com>
> Subject: Re: [PATCH v6 0/21]middle-end: Support early break/return auto-
> vectorization
> 
> On Mon, 6 Nov 2023, Tamar Christina wrote:
> 
> > Hi All,
> >
> > This patch adds initial support for early break vectorization in GCC.
> > The support is added for any target that implements a vector cbranch
> > optab, this includes both fully masked and non-masked targets.
> >
> > Depending on the operation, the vectorizer may also require support
> > for boolean mask reductions using Inclusive OR.  This is however only
> > checked when the comparison would produce multiple statements.
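> >
> > As a purely hypothetical illustration (not taken from this series), a
> > condition that expands to more than one vector comparison needs the
> > resulting masks combined with an inclusive OR before the exit branch can
> > be taken:
> >
> >  for (int i = 0; i < N; i++)
> >  {
> >    /* Two comparisons, two vector masks; deciding whether to take the
> >       exit requires an IOR combination of the masks.  */
> >    if (a[i] > x || b[i] > x)
> >      break;
> >    c[i] = a[i] + b[i];
> >  }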
> >
> > Note: I am currently struggling to get patch 7 correct in all cases and
> > could use some feedback there.
> >
> > Concretely the kind of loops supported are of the forms:
> >
> >  for (int i = 0; i < N; i++)
> >  {
> >    <statements1>
> >    if (<condition>)
> >      {
> >        ...
> >        <action>;
> >      }
> >    <statements2>
> >  }
> >
> > where <action> can be:
> >  - break
> >  - return
> >  - goto
> >
> > Any number of statements can be used before the <action> occurs.
> >
> > Since this is an initial version for GCC 14 it has the following
> > limitations and
> > features:
> >
> > - Only fixed sized iterations and buffers are supported.  That is to say any
> >   vectors loaded or stored must be to statically allocated arrays with known
> >   sizes. N must also be known.  This limitation is because our primary target
> >   for this optimization is SVE.  For VLA SVE we can't easily do cross page
> >   iteration checks. The result is likely to also not be beneficial. For that
> >   reason we punt support for variable buffers till we have First-Faulting
> >   support in GCC.
> > - Any stores in <statements1> should not be to the same objects as in
> >   <condition>.  Loads are fine as long as they don't have the possibility to
> >   alias.  More concretely, we block RAW dependencies when the intermediate
> >   value can't be separated from the store, or the store itself can't be moved.
> > - Prologue peeling, alignment peeling and loop versioning are supported.
> > - Fully masked loops, unmasked loops and partially masked loops are
> >   supported.
> > - Any number of loop early exits are supported.
> > - No support for epilogue vectorization.  The only epilogue supported is the
> >   scalar final one.  Peeling code supports it but the code motion code cannot
> >   find instructions to make the move in the epilog.
> > - Early breaks are only supported for inner loop vectorization.
> >
> > I have pushed a branch to refs/users/tnfchris/heads/gcc-14-early-break
> >
> > With the help of IPA and LTO this still gets hit quite often.  During
> > bootstrap it is hit rather frequently.  Additionally TSVC s332, s481 and
> > s482 all pass now since these are tests for support for early exit
> > vectorization.
> >
> > This implementation does not support completely handling the early
> > break inside the vector loop itself but instead supports adding checks
> > such that if we know that we have to exit in the current iteration
> > then we branch to scalar code to actually do the final VF iterations,
> > which handles all the code in <action>.
> >
> > For the scalar loop we know that whatever exit you take you have to
> > perform at most VF iterations.  For vector code we only care about the
> > state of fully performed iterations and reset the scalar code to the
> > (partially) remaining loop.
> >
> > That is to say, the first vector loop executes so long as the early
> > exit isn't needed.  Once the exit is taken, the scalar code will
> > perform at most VF extra iterations.  The exact number depends on peeling,
> > the iteration start, and which exit was taken (natural or early).  For this
> > scalar loop, all early exits are treated the same.
> >
> > When we vectorize we move any statement not related to the early break
> > itself that would be incorrect to execute before the break (i.e.
> > has side effects) to after the break.  If this is not possible we decline
> > to vectorize.
> >
> > This means that we check at the start of each iteration whether we are
> > going to exit or not.  During the analysis phase we check whether we
> > are allowed to do this moving of statements.  Also note that we only
> > move the scalar statements, and only do so after peeling but just before
> > we start transforming statements.
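> >
> > A rough sketch of the shape of the generated code (simplified, with a VF
> > of 4 and ignoring peeling and alignment; any_lane here is just a placeholder
> > for the vector cbranch on the exit mask) would be:
> >
> >  i = 0;
> >  for (; i + 4 <= N; i += 4)
> >  {
> >    if (any_lane (<condition> on elements i..i+3))  /* vector cbranch  */
> >      break;                                        /* go to scalar code  */
> >    <vectorized statements1 and statements2>
> >  }
> >  /* Scalar code: performs at most VF iterations and executes <action>.  */
> >  for (; i < N; i++)
> >  {
> >    <statements1>
> >    if (<condition>)
> >      <action>;
> >    <statements2>
> >  }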
> >
> > Codegen:
> >
> > For example, for:
> >
> > #define N 803
> > unsigned vect_a[N];
> > unsigned vect_b[N];
> >
> > unsigned test4(unsigned x)
> > {
> >  unsigned ret = 0;
> >  for (int i = 0; i < N; i++)
> >  {
> >    vect_b[i] = x + i;
> >    if (vect_a[i] > x)
> >      break;
> >    vect_a[i] = x;
> >
> >  }
> >  return ret;
> > }
> >
> > We generate for Adv. SIMD:
> >
> > test4:
> >         adrp    x2, .LC0
> >         adrp    x3, .LANCHOR0
> >         dup     v2.4s, w0
> >         add     x3, x3, :lo12:.LANCHOR0
> >         movi    v4.4s, 0x4
> >         add     x4, x3, 3216
> >         ldr     q1, [x2, #:lo12:.LC0]
> >         mov     x1, 0
> >         mov     w2, 0
> >         .p2align 3,,7
> > .L3:
> >         ldr     q0, [x3, x1]
> >         add     v3.4s, v1.4s, v2.4s
> >         add     v1.4s, v1.4s, v4.4s
> >         cmhi    v0.4s, v0.4s, v2.4s
> >         umaxp   v0.4s, v0.4s, v0.4s
> >         fmov    x5, d0
> >         cbnz    x5, .L6
> >         add     w2, w2, 1
> >         str     q3, [x1, x4]
> >         str     q2, [x3, x1]
> >         add     x1, x1, 16
> >         cmp     w2, 200
> >         bne     .L3
> >         mov     w7, 3
> > .L2:
> >         lsl     w2, w2, 2
> >         add     x5, x3, 3216
> >         add     w6, w2, w0
> >         sxtw    x4, w2
> >         ldr     w1, [x3, x4, lsl 2]
> >         str     w6, [x5, x4, lsl 2]
> >         cmp     w0, w1
> >         bcc     .L4
> >         add     w1, w2, 1
> >         str     w0, [x3, x4, lsl 2]
> >         add     w6, w1, w0
> >         sxtw    x1, w1
> >         ldr     w4, [x3, x1, lsl 2]
> >         str     w6, [x5, x1, lsl 2]
> >         cmp     w0, w4
> >         bcc     .L4
> >         add     w4, w2, 2
> >         str     w0, [x3, x1, lsl 2]
> >         sxtw    x1, w4
> >         add     w6, w1, w0
> >         ldr     w4, [x3, x1, lsl 2]
> >         str     w6, [x5, x1, lsl 2]
> >         cmp     w0, w4
> >         bcc     .L4
> >         str     w0, [x3, x1, lsl 2]
> >         add     w2, w2, 3
> >         cmp     w7, 3
> >         beq     .L4
> >         sxtw    x1, w2
> >         add     w2, w2, w0
> >         ldr     w4, [x3, x1, lsl 2]
> >         str     w2, [x5, x1, lsl 2]
> >         cmp     w0, w4
> >         bcc     .L4
> >         str     w0, [x3, x1, lsl 2]
> > .L4:
> >         mov     w0, 0
> >         ret
> >         .p2align 2,,3
> > .L6:
> >         mov     w7, 4
> >         b       .L2
> >
> > and for SVE:
> >
> > test4:
> >         adrp    x2, .LANCHOR0
> >         add     x2, x2, :lo12:.LANCHOR0
> >         add     x5, x2, 3216
> >         mov     x3, 0
> >         mov     w1, 0
> >         cntw    x4
> >         mov     z1.s, w0
> >         index   z0.s, #0, #1
> >         ptrue   p1.b, all
> >         ptrue   p0.s, all
> >         .p2align 3,,7
> > .L3:
> >         ld1w    z2.s, p1/z, [x2, x3, lsl 2]
> >         add     z3.s, z0.s, z1.s
> >         cmplo   p2.s, p0/z, z1.s, z2.s
> >         b.any   .L2
> >         st1w    z3.s, p1, [x5, x3, lsl 2]
> >         add     w1, w1, 1
> >         st1w    z1.s, p1, [x2, x3, lsl 2]
> >         add     x3, x3, x4
> >         incw    z0.s
> >         cmp     w3, 803
> >         bls     .L3
> > .L5:
> >         mov     w0, 0
> >         ret
> >         .p2align 2,,3
> > .L2:
> >         cntw    x5
> >         mul     w1, w1, w5
> >         cbz     w5, .L5
> >         sxtw    x1, w1
> >         sub     w5, w5, #1
> >         add     x5, x5, x1
> >         add     x6, x2, 3216
> >         b       .L6
> >         .p2align 2,,3
> > .L14:
> >         str     w0, [x2, x1, lsl 2]
> >         cmp     x1, x5
> >         beq     .L5
> >         mov     x1, x4
> > .L6:
> >         ldr     w3, [x2, x1, lsl 2]
> >         add     w4, w0, w1
> >         str     w4, [x6, x1, lsl 2]
> >         add     x4, x1, 1
> >         cmp     w0, w3
> >         bcs     .L14
> >         mov     w0, 0
> >         ret
> >
> > On the workloads this work is based on we see a 2-3x performance
> > uplift using this patch.
> >
> > Follow up plan:
> >  - Boolean vectorization has several shortcomings.  I've filed PR110223 with
> >    the bigger ones that cause vectorization to fail with this patch.
> >  - SLP support.  This is planned for GCC 15 as for the majority of cases
> >    building SLP itself fails.
> 
> It would be nice to get at least single-lane SLP support working.  I think you
> need to treat the gcond as SLP root stmt and basically do discovery on the
> condition as if it were a mask generating condition.

Hmm ok, will give it a try.

> 
> Code generation would then simply schedule the gcond root instances first
> (that would get you the code motion automagically).

Right, so you're saying treat the gconds as the seed, and stores as a sink.
And then schedule only the instances without a gcond around such that we
can still vectorize in place to get the branches.  Ok, makes sense.

> 
> So, add a new slp_instance_kind, for example slp_inst_kind_early_break, and
> record the gcond as root stmt.  Possibly "pattern" recognizing
> 
>  gcond <_1 != _2>
> 
> as
> 
>  _mask = _1 != _2;
>  gcond <_mask != 0>
> 
> makes the SLP discovery less fiddly (but in theory you can of course handle
> gconds directly).
> 
> Is there any part of the series that can be pushed independently?  If so I'll
> try to look at those parts first.
> 

Aside from:

[PATCH 4/21]middle-end: update loop peeling code to maintain LCSSA form for 
early breaks
[PATCH 7/21]middle-end: update IV update code to support early breaks and 
arbitrary exits  

The rest lie dormant and don't do anything or disrupt the tree until those two
are in.  They all just touch up different parts piecewise.

They do rely on the new field introduced in:

[PATCH 3/21]middle-end: Implement code motion and dependency analysis for early 
breaks

But I can split them out.

I'll start respinning #4 and #7 with your latest changes now.

Thanks,
Tamar

> Thanks,
> Richard.
