> Pengxuan Zheng <quic_pzh...@quicinc.com> writes:
> > This patch improves GCC's vectorization of __builtin_popcount for the
> > aarch64 target by adding popcount patterns for vector modes besides
> > QImode, i.e., HImode, SImode and DImode.
> >
> > With this patch, we now generate the following for HImode:
> >   cnt    v1.16b, v.16b
> >   uaddlp v2.8h, v1.16b
> >
> > For SImode, we generate:
> >   cnt    v1.16b, v.16b
> >   uaddlp v2.8h, v1.16b
> >   uaddlp v3.4s, v2.8h
> >
> > For V2DI, we generate:
> >   cnt    v1.16b, v.16b
> >   uaddlp v2.8h, v1.16b
> >   uaddlp v3.4s, v2.8h
> >   uaddlp v4.2d, v3.4s
> >
> > gcc/ChangeLog:
> >
> > 	PR target/113859
> > 	* config/aarch64/aarch64-simd.md (popcount<mode>2): New define_expand.
> >
> > gcc/testsuite/ChangeLog:
> >
> > 	PR target/113859
> > 	* gcc.target/aarch64/popcnt-vec.c: New test.
> >
> > Signed-off-by: Pengxuan Zheng <quic_pzh...@quicinc.com>
> > ---
> >  gcc/config/aarch64/aarch64-simd.md            | 40 ++++++++++++++++
> >  gcc/testsuite/gcc.target/aarch64/popcnt-vec.c | 48 +++++++++++++++++++
> >  2 files changed, 88 insertions(+)
> >  create mode 100644 gcc/testsuite/gcc.target/aarch64/popcnt-vec.c
> >
> > diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md
> > index f8bb973a278..093c32ee8ff 100644
> > --- a/gcc/config/aarch64/aarch64-simd.md
> > +++ b/gcc/config/aarch64/aarch64-simd.md
> > @@ -3540,6 +3540,46 @@ (define_insn "popcount<mode>2<vczle><vczbe>"
> >    [(set_attr "type" "neon_cnt<q>")]
> >  )
> >
> > +(define_expand "popcount<mode>2"
> > +  [(set (match_operand:VQN 0 "register_operand" "=w")
> > +	(popcount:VQN (match_operand:VQN 1 "register_operand" "w")))]
> > +  "TARGET_SIMD"
> > +  {
> > +    rtx v = gen_reg_rtx (V16QImode);
> > +    rtx v1 = gen_reg_rtx (V16QImode);
> > +    emit_move_insn (v, gen_lowpart (V16QImode, operands[1]));
> > +    emit_insn (gen_popcountv16qi2 (v1, v));
> > +    if (<MODE>mode == V8HImode)
> > +      {
> > +	/* For V8HI, we generate:
> > +	     cnt    v1.16b, v.16b
> > +	     uaddlp v2.8h, v1.16b  */
> > +	emit_insn (gen_aarch64_uaddlpv16qi (operands[0], v1));
> > +	DONE;
> > +      }
> > +    rtx v2 = gen_reg_rtx (V8HImode);
> > +    emit_insn (gen_aarch64_uaddlpv16qi (v2, v1));
> > +    if (<MODE>mode == V4SImode)
> > +      {
> > +	/* For V4SI, we generate:
> > +	     cnt    v1.16b, v.16b
> > +	     uaddlp v2.8h, v1.16b
> > +	     uaddlp v3.4s, v2.8h  */
> > +	emit_insn (gen_aarch64_uaddlpv8hi (operands[0], v2));
> > +	DONE;
> > +      }
> > +    /* For V2DI, we generate:
> > +	 cnt    v1.16b, v.16b
> > +	 uaddlp v2.8h, v1.16b
> > +	 uaddlp v3.4s, v2.8h
> > +	 uaddlp v4.2d, v3.4s  */
> > +    rtx v3 = gen_reg_rtx (V4SImode);
> > +    emit_insn (gen_aarch64_uaddlpv8hi (v3, v2));
> > +    emit_insn (gen_aarch64_uaddlpv4si (operands[0], v3));
> > +    DONE;
> > +  }
> > +)
> > +
>
> Could you add support for V4HI and V2SI at the same time?
Yes, Richard, and thanks a lot for the example consolidating the handling of all 5 modes. Here's the updated patch along with added tests covering V4HI and V2SI.

https://gcc.gnu.org/pipermail/gcc-patches/2024-June/654429.html

Thanks,
Pengxuan

> I think it's possible to handle all 5 modes iteratively, like so:
>
> (define_expand "popcount<mode>2"
>   [(set (match_operand:VDQHSD 0 "register_operand")
> 	(popcount:VDQHSD (match_operand:VDQHSD 1 "register_operand")))]
>   "TARGET_SIMD"
>   {
>     /* Generate a byte popcount.  */
>     machine_mode mode = <bitsize> == 64 ? V8QImode : V16QImode;
>     rtx tmp = gen_reg_rtx (mode);
>     auto icode = optab_handler (popcount_optab, mode);
>     emit_insn (GEN_FCN (icode) (tmp, gen_lowpart (mode, operands[1])));
>
>     /* Use a sequence of UADDLPs to accumulate the counts.  Each step
>        doubles the element size and halves the number of elements.  */
>     do
>       {
> 	auto icode = code_for_aarch64_addlp (ZERO_EXTEND, GET_MODE (tmp));
> 	mode = insn_data[icode].operand[0].mode;
> 	rtx dest = mode == <MODE>mode ? operands[0] : gen_reg_rtx (mode);
> 	emit_insn (GEN_FCN (icode) (dest, tmp));
> 	tmp = dest;
>       }
>     while (mode != <MODE>mode);
>     DONE;
>   })
>
> (only lightly tested).  This requires changing:
>
>   (define_expand "aarch64_<su>addlp<mode>"
>
> to:
>
>   (define_expand "@aarch64_<su>addlp<mode>"
>
> Thanks,
> Richard
>
> >  ;; 'across lanes' max and min ops.
> >
> >  ;; Template for outputting a scalar, so we can create __builtins
> >  ;; which can be
> > diff --git a/gcc/testsuite/gcc.target/aarch64/popcnt-vec.c b/gcc/testsuite/gcc.target/aarch64/popcnt-vec.c
> > new file mode 100644
> > index 00000000000..4c9a1b95990
> > --- /dev/null
> > +++ b/gcc/testsuite/gcc.target/aarch64/popcnt-vec.c
> > @@ -0,0 +1,48 @@
> > +/* { dg-do compile } */
> > +/* { dg-options "-O2" } */
> > +
> > +/* This function should produce cnt v.16b.  */
> > +void
> > +bar (unsigned char *__restrict b, unsigned char *__restrict d)
> > +{
> > +  for (int i = 0; i < 1024; i++)
> > +    d[i] = __builtin_popcount (b[i]);
> > +}
> > +
> > +/* This function should produce cnt v.16b and uaddlp (Add Long Pairwise).  */
> > +void
> > +bar1 (unsigned short *__restrict b, unsigned short *__restrict d)
> > +{
> > +  for (int i = 0; i < 1024; i++)
> > +    d[i] = __builtin_popcount (b[i]);
> > +}
> > +
> > +/* This function should produce cnt v.16b and 2 uaddlp (Add Long Pairwise).  */
> > +void
> > +bar2 (unsigned int *__restrict b, unsigned int *__restrict d)
> > +{
> > +  for (int i = 0; i < 1024; i++)
> > +    d[i] = __builtin_popcount (b[i]);
> > +}
> > +
> > +/* This function should produce cnt v.16b and 3 uaddlp (Add Long Pairwise).  */
> > +void
> > +bar3 (unsigned long long *__restrict b, unsigned long long *__restrict d)
> > +{
> > +  for (int i = 0; i < 1024; i++)
> > +    d[i] = __builtin_popcountll (b[i]);
> > +}
> > +
> > +/* SLP
> > +   This function should produce cnt v.16b and 3 uaddlp (Add Long Pairwise).  */
> > +void
> > +bar4 (unsigned long long *__restrict b, unsigned long long *__restrict d)
> > +{
> > +  d[0] = __builtin_popcountll (b[0]);
> > +  d[1] = __builtin_popcountll (b[1]);
> > +}
> > +
> > +/* { dg-final { scan-assembler-not {\tbl\tpopcount} } } */
> > +/* { dg-final { scan-assembler-times {cnt\t} 5 } } */
> > +/* { dg-final { scan-assembler-times {uaddlp\t} 9 } } */
> > +/* { dg-final { scan-assembler-times {ldr\tq} 5 } } */