On Wed, 16 Jun 2021 at 15:49, Prathamesh Kulkarni
<prathamesh.kulka...@linaro.org> wrote:
>
> On Mon, 14 Jun 2021 at 16:15, Kyrylo Tkachov <kyrylo.tkac...@arm.com> wrote:
> >
> >
> > > -----Original Message-----
> > > From: Prathamesh Kulkarni <prathamesh.kulka...@linaro.org>
> > > Sent: 14 June 2021 08:58
> > > To: gcc Patches <gcc-patches@gcc.gnu.org>; Kyrylo Tkachov
> > > <kyrylo.tkac...@arm.com>
> > > Subject: Re: [ARM] PR97906 - Missed lowering abs(a) >= abs(b) to vacge
> > >
> > > On Mon, 7 Jun 2021 at 12:46, Prathamesh Kulkarni
> > > <prathamesh.kulka...@linaro.org> wrote:
> > > >
> > > > On Tue, 1 Jun 2021 at 16:03, Prathamesh Kulkarni
> > > > <prathamesh.kulka...@linaro.org> wrote:
> > > > >
> > > > > Hi,
> > > > > As mentioned in the PR, for the following test-case:
> > > > >
> > > > > #include <arm_neon.h>
> > > > >
> > > > > uint32x2_t f1(float32x2_t a, float32x2_t b)
> > > > > {
> > > > >   return vabs_f32 (a) >= vabs_f32 (b);
> > > > > }
> > > > >
> > > > > uint32x2_t f2(float32x2_t a, float32x2_t b)
> > > > > {
> > > > >   return (uint32x2_t) __builtin_neon_vcagev2sf (a, b);
> > > > > }
> > > > >
> > > > > We generate vacge for f2, but with -ffast-math, we generate the
> > > > > following for f1:
> > > > > f1:
> > > > >         vabs.f32        d1, d1
> > > > >         vabs.f32        d0, d0
> > > > >         vcge.f32        d0, d0, d1
> > > > >         bx      lr
> > > > >
> > > > > This happens because the middle-end inverts the comparison to
> > > > > b <= a, .optimized dump:
> > > > >   _8 = __builtin_neon_vabsv2sf (a_4(D));
> > > > >   _7 = __builtin_neon_vabsv2sf (b_5(D));
> > > > >   _1 = _7 <= _8;
> > > > >   _2 = VIEW_CONVERT_EXPR<vector(2) int>(_1);
> > > > >   _6 = VIEW_CONVERT_EXPR<uint32x2_t>(_2);
> > > > >   return _6;
> > > > >
> > > > > and combine fails to match the following pattern:
> > > > > (set (reg:V2SI 121)
> > > > >     (neg:V2SI (le:V2SI (abs:V2SF (reg:V2SF 123))
> > > > >         (abs:V2SF (reg:V2SF 122)))))
> > > > >
> > > > > because the neon_vca<cmp_op><mode> pattern has the GTGE code iterator.
> > > > > The attached patch adjusts the neon_vca patterns to use GLTE
> > > > > instead, similar to neon_vca<cmp_op><mode>_fp16insn, and removes
> > > > > the NEON_VACMP iterator.
> > > > > Code-gen with patch:
> > > > > f1:
> > > > >         vacle.f32       d0, d1, d0
> > > > >         bx      lr
> > > > >
> > > > > Bootstrapped + tested on arm-linux-gnueabihf and cross-tested on
> > > > > arm*-*-*.
> > > > > OK to commit?
> >
> > Is that inversion guaranteed to happen (is it a canonicalization rule)?
>
> I think it follows this canonicalization rule from tree_swap_operands_p:
>
>   /* It is preferable to swap two SSA_NAME to ensure a canonical form
>      for commutative and comparison operators.  Ensuring a canonical
>      form allows the optimizers to find additional redundancies without
>      having to explicitly check for both orderings.  */
>   if (TREE_CODE (arg0) == SSA_NAME
>       && TREE_CODE (arg1) == SSA_NAME
>       && SSA_NAME_VERSION (arg0) > SSA_NAME_VERSION (arg1))
>     return 1;
>
> For the above test-case, it's the ccp1 pass that inverts the comparison.
> The input to the ccp1 pass is:
>   _12 = __builtin_neon_vabsv2sf (a_6(D));
>   _14 = _12;
>   _1 = _14;
>   _11 = __builtin_neon_vabsv2sf (b_8(D));
>   _16 = _11;
>   _2 = _16;
>   _3 = _1 >= _2;
>   _4 = VEC_COND_EXPR <_3, { -1, -1 }, { 0, 0 }>;
>   _10 = VIEW_CONVERT_EXPR<uint32x2_t>(_4);
>   return _10;
>
> _3 = _1 >= _2 is folded into:
> _3 = _12 >= _11
>
> Since _12 is a higher SSA version than _11, it is canonicalized to:
> _3 = _11 <= _12.

Hi Kyrill,
Is it OK to push given the above canonicalization?
Thanks,
Prathamesh

> Thanks,
> Prathamesh
>
> > If so, ok.
> > Thanks,
> > Kyrill
> > >
> > > Thanks,
> > > Prathamesh
> > > >
> > > > Thanks,
> > > > Prathamesh
> > > > >
> > > > > Thanks,
> > > > > Prathamesh