> -----Original Message-----
> From: Tamar Christina <tamar.christ...@arm.com>
> Sent: Wednesday, September 29, 2021 5:20 PM
> To: gcc-patches@gcc.gnu.org
> Cc: nd <n...@arm.com>; Richard Earnshaw <richard.earns...@arm.com>;
> Marcus Shawcroft <marcus.shawcr...@arm.com>; Kyrylo Tkachov
> <kyrylo.tkac...@arm.com>; Richard Sandiford <richard.sandif...@arm.com>
> Subject: [PATCH 2/7] AArch64 Add combine patterns for narrowing shift of
> half top bits (shuffle)
>
> Hi All,
>
> A (narrowing) right shift by half the width of the original type is
> essentially a shuffle that moves the top bits of each element down.
>
> If we have a hi/lo pair we can use a single shuffle instead of needing
> two shifts.
>
> i.e.
>
> typedef short int16_t;
> typedef unsigned short uint16_t;
>
> void foo (uint16_t * restrict a, int16_t * restrict d, int n)
> {
>   for( int i = 0; i < n; i++ )
>     d[i] = (a[i] * a[i]) >> 16;
> }
>
> now generates:
>
> .L4:
>         ldr     q0, [x0, x3]
>         umull   v1.4s, v0.4h, v0.4h
>         umull2  v0.4s, v0.8h, v0.8h
>         uzp2    v0.8h, v1.8h, v0.8h
>         str     q0, [x1, x3]
>         add     x3, x3, 16
>         cmp     x4, x3
>         bne     .L4
>
> instead of:
>
> .L4:
>         ldr     q0, [x0, x3]
>         umull   v1.4s, v0.4h, v0.4h
>         umull2  v0.4s, v0.8h, v0.8h
>         sshr    v1.4s, v1.4s, 16
>         sshr    v0.4s, v0.4s, 16
>         xtn     v1.4h, v1.4s
>         xtn2    v1.8h, v0.4s
>         str     q1, [x1, x3]
>         add     x3, x3, 16
>         cmp     x4, x3
>         bne     .L4
>
> Bootstrapped and regtested on aarch64-none-linux-gnu with no issues.
>
> Ok for master?
Ok.
Thanks,
Kyrill

> Thanks,
> Tamar
>
> gcc/ChangeLog:
>
> 	* config/aarch64/aarch64-simd.md
> 	(*aarch64_<srn_op>topbits_shuffle<mode>,
> 	*aarch64_topbits_shuffle<mode>): New.
> 	* config/aarch64/predicates.md
> 	(aarch64_simd_shift_imm_vec_exact_top): New.
>
> gcc/testsuite/ChangeLog:
>
> 	* gcc.target/aarch64/shrn-combine-2.c: New test.
> 	* gcc.target/aarch64/shrn-combine-3.c: New test.
>
> --- inline copy of patch --
> diff --git a/gcc/config/aarch64/aarch64-simd.md
> b/gcc/config/aarch64/aarch64-simd.md
> index d7b6cae424622d259f97a3d5fa9093c0fb0bd5ce..300bf001b59ca7fa197c580b10adb7f70f20d1e0 100644
> --- a/gcc/config/aarch64/aarch64-simd.md
> +++ b/gcc/config/aarch64/aarch64-simd.md
> @@ -1840,6 +1840,36 @@ (define_insn "*aarch64_<srn_op>shrn<mode>2_vect"
>    [(set_attr "type" "neon_shift_imm_narrow_q")]
>  )
>
> +(define_insn "*aarch64_<srn_op>topbits_shuffle<mode>"
> +  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
> +	(vec_concat:<VNARROWQ2>
> +	  (truncate:<VNARROWQ>
> +	    (SHIFTRT:VQN (match_operand:VQN 1 "register_operand" "w")
> +	      (match_operand:VQN 2 "aarch64_simd_shift_imm_vec_exact_top")))
> +	  (truncate:<VNARROWQ>
> +	    (SHIFTRT:VQN (match_operand:VQN 3 "register_operand" "w")
> +	      (match_dup 2)))))]
> +  "TARGET_SIMD"
> +  "uzp2\\t%0.<V2ntype>, %1.<V2ntype>, %3.<V2ntype>"
> +  [(set_attr "type" "neon_permute<q>")]
> +)
> +
> +(define_insn "*aarch64_topbits_shuffle<mode>"
> +  [(set (match_operand:<VNARROWQ2> 0 "register_operand" "=w")
> +	(vec_concat:<VNARROWQ2>
> +	  (unspec:<VNARROWQ> [
> +	      (match_operand:VQN 1 "register_operand" "w")
> +	      (match_operand:VQN 2 "aarch64_simd_shift_imm_vec_exact_top")
> +	  ] UNSPEC_RSHRN)
> +	  (unspec:<VNARROWQ> [
> +	      (match_operand:VQN 3 "register_operand" "w")
> +	      (match_dup 2)
> +	  ] UNSPEC_RSHRN)))]
> +  "TARGET_SIMD"
> +  "uzp2\\t%0.<V2ntype>, %1.<V2ntype>, %3.<V2ntype>"
> +  [(set_attr "type" "neon_permute<q>")]
> +)
> +
>  (define_expand "aarch64_shrn<mode>"
>    [(set (match_operand:<VNARROWQ> 0 "register_operand")
> 	(truncate:<VNARROWQ>
> diff --git a/gcc/config/aarch64/predicates.md
> b/gcc/config/aarch64/predicates.md
> index 49f02ae0381359174fed80c2a2264295c75bc189..7fd4f9e7d06d3082d6f3047290f0446789e1d0d2 100644
> --- a/gcc/config/aarch64/predicates.md
> +++ b/gcc/config/aarch64/predicates.md
> @@ -545,6 +545,12 @@ (define_predicate "aarch64_simd_shift_imm_offset_di"
>    (and (match_code "const_int")
>         (match_test "IN_RANGE (INTVAL (op), 1, 64)")))
>
> +(define_predicate "aarch64_simd_shift_imm_vec_exact_top"
> +  (and (match_code "const_vector")
> +       (match_test "aarch64_const_vec_all_same_in_range_p (op,
> +			GET_MODE_UNIT_BITSIZE (GET_MODE (op)) / 2,
> +			GET_MODE_UNIT_BITSIZE (GET_MODE (op)) / 2)")))
> +
>  (define_predicate "aarch64_simd_shift_imm_vec_qi"
>    (and (match_code "const_vector")
>         (match_test "aarch64_const_vec_all_same_in_range_p (op, 1, 8)")))
> diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine-2.c
> b/gcc/testsuite/gcc.target/aarch64/shrn-combine-2.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..924b3b849e449082b8c0b7dc6b955a2bad8d0911
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine-2.c
> @@ -0,0 +1,15 @@
> +/* { dg-do assemble } */
> +/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
> +
> +typedef short int16_t;
> +typedef unsigned short uint16_t;
> +
> +void foo (uint16_t * restrict a, int16_t * restrict d, int n)
> +{
> +  for( int i = 0; i < n; i++ )
> +    d[i] = (a[i] * a[i]) >> 16;
> +}
> +
> +/* { dg-final { scan-assembler-times {\tuzp2\t} 1 } } */
> +/* { dg-final { scan-assembler-not {\tshrn\t} } } */
> +/* { dg-final { scan-assembler-not {\tshrn2\t} } } */
> diff --git a/gcc/testsuite/gcc.target/aarch64/shrn-combine-3.c
> b/gcc/testsuite/gcc.target/aarch64/shrn-combine-3.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..929a55c5c338844e6a5c5ad249af482286ab9c61
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/shrn-combine-3.c
> @@ -0,0 +1,14 @@
> +/* { dg-do assemble } */
> +/* { dg-options "-O3 --save-temps --param=vect-epilogues-nomask=0" } */
> +
> +
> +#include <arm_neon.h>
> +
> +uint16x8_t foo (uint32x4_t a, uint32x4_t b)
> +{
> +  return vrshrn_high_n_u32 (vrshrn_n_u32 (a, 16), b, 16);
> +}
> +
> +/* { dg-final { scan-assembler-times {\tuzp2\t} 1 } } */
> +/* { dg-final { scan-assembler-not {\tshrn\t} } } */
> +/* { dg-final { scan-assembler-not {\tshrn2\t} } } */
>
>
> --