On Thu, Aug 5, 2021 at 11:53 AM Richard Biener <rguent...@suse.de> wrote:
> On Thu, 5 Aug 2021, Christophe Lyon wrote:
>
> > On Wed, Aug 4, 2021 at 2:08 PM Richard Biener <rguent...@suse.de> wrote:
> >
> > > On Wed, 4 Aug 2021, Richard Sandiford wrote:
> > >
> > > > Richard Biener <rguent...@suse.de> writes:
> > > > > This adds a gather vectorization capability to the vectorizer
> > > > > without target support by decomposing the offset vector, doing
> > > > > scalar loads and then building a vector from the result.  This
> > > > > is aimed mainly at cases where vectorizing the rest of the loop
> > > > > offsets the cost of vectorizing the gather.
> > > > >
> > > > > Note it's difficult to avoid vectorizing the offset load, but in
> > > > > some cases later passes can turn the vector load + extract into
> > > > > scalar loads, see the followup patch.
> > > > >
> > > > > On SPEC CPU 2017 510.parest_r this improves runtime from 250s
> > > > > to 219s on a Zen2 CPU which has its native gather instructions
> > > > > disabled (using those the runtime instead increases to 254s)
> > > > > using -Ofast -march=znver2 [-flto].  It turns out the critical
> > > > > loops in this benchmark all perform gather operations.
> > > > >
> > > > > Bootstrapped and tested on x86_64-unknown-linux-gnu.
> > > > >
> > > > > 2021-07-30  Richard Biener  <rguent...@suse.de>
> > > > >
> > > > >         * tree-vect-data-refs.c (vect_check_gather_scatter):
> > > > >         Include widening conversions only when the result is
> > > > >         still handled by a native gather or the current offset
> > > > >         size does not already match the data size.
> > > > >         Also succeed analysis in case there's no native support,
> > > > >         noted by an IFN_LAST ifn and a NULL decl.
> > > > >         (vect_analyze_data_refs): Always consider gathers.
> > > > >         * tree-vect-patterns.c (vect_recog_gather_scatter_pattern):
> > > > >         Test for no IFN gather rather than decl gather.
> > > > >         * tree-vect-stmts.c (vect_model_load_cost): Pass in the
> > > > >         gather-scatter info and cost emulated gathers accordingly.
> > > > >         (vect_truncate_gather_scatter_offset): Properly test for
> > > > >         no IFN gather.
> > > > >         (vect_use_strided_gather_scatters_p): Likewise.
> > > > >         (get_load_store_type): Handle emulated gathers and their
> > > > >         restrictions.
> > > > >         (vectorizable_load): Likewise.  Emulate them by extracting
> > > > >         scalar offsets, doing scalar loads and a vector construct.
> > > > >
> > > > >         * gcc.target/i386/vect-gather-1.c: New testcase.
> > > > >         * gfortran.dg/vect/vect-8.f90: Adjust.
>
> > Hi,
> >
> > The adjusted testcase now fails on aarch64:
> > FAIL: gfortran.dg/vect/vect-8.f90 -O scan-tree-dump-times vect
> > "vectorized 23 loops" 1
>
> That likely means it needs adjustment for the aarch64 case as well,
> which I didn't touch.  I suppose it's now vectorizing 24 loops?
>
> And 24 with SVE as well, so we might be able to merge the
> aarch64_sve and aarch64 && ! aarch64_sve cases?  Like with
>
> diff --git a/gcc/testsuite/gfortran.dg/vect/vect-8.f90 b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> index cc1aebfbd84..c8a7d896bac 100644
> --- a/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> +++ b/gcc/testsuite/gfortran.dg/vect/vect-8.f90
> @@ -704,7 +704,6 @@ CALL track('KERNEL ')
>  RETURN
>  END SUBROUTINE kernel
>
> -! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { target aarch64_sve } } }
> -! { dg-final { scan-tree-dump-times "vectorized 23 loops" 1 "vect" { target { aarch64*-*-* && { ! aarch64_sve } } } } }
> +! { dg-final { scan-tree-dump-times "vectorized 24 loops" 1 "vect" { target aarch64*-*-* } } }
>  ! { dg-final { scan-tree-dump-times "vectorized 2\[234\] loops" 1 "vect" { target { vect_intdouble_cvt && { ! aarch64*-*-* } } } } }
>  ! { dg-final { scan-tree-dump-times "vectorized 17 loops" 1 "vect" { target { { ! vect_intdouble_cvt } && { ! aarch64*-*-* } } } } }
>
> f951 vect.exp testing with and without -march=armv8.3-a+sve shows
> this might work, but if you can double-check that would be nice.
Indeed LGTM, thanks

> Richard.
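For readers following the thread, the emulated-gather idea from the quoted
commit message (decompose the offset vector, do scalar loads, build a vector
from the results) can be sketched in plain GNU C.  This is only an
illustration, not GCC-internal code; the function names and the fixed 4-wide
vector types are made up for the example:

```c
#include <stddef.h>

/* A loop with an indirect load -- the kind of gather the patch lets the
   vectorizer handle even without a native gather instruction.  */
double
gather_sum (const double *data, const int *idx, size_t n)
{
  double sum = 0.0;
  for (size_t i = 0; i < n; ++i)
    sum += data[idx[i]];	/* gather: load data[idx[i]] */
  return sum;
}

/* What the emulation does per vector iteration (4-wide sketch, using GNU C
   vector extensions): extract each lane of the offset vector, perform a
   scalar load, and construct the result vector from the scalars.  */
typedef double v4df __attribute__ ((vector_size (32)));
typedef int v4si __attribute__ ((vector_size (16)));

v4df
emulated_gather (const double *data, v4si offsets)
{
  v4df res = { data[offsets[0]], data[offsets[1]],
	       data[offsets[2]], data[offsets[3]] };
  return res;
}
```

The win the quoted numbers show on 510.parest_r comes from loops like
gather_sum above: even though each element is still loaded with a scalar
load, the rest of the loop (here the reduction) vectorizes, which can more
than pay for the per-lane extraction and construct.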