On Fri, Jun 16, 2017 at 3:06 PM, Bin.Cheng <amker.ch...@gmail.com> wrote:
> On Fri, Jun 16, 2017 at 11:49 AM, Richard Biener
> <richard.guent...@gmail.com> wrote:
>> On Wed, Jun 14, 2017 at 3:07 PM, Bin Cheng <bin.ch...@arm.com> wrote:
>>> Hi,
>>> Loop split forces intermediate computations to gimple operands all the
>>> time when computing bound information.  This is not good since folding
>>> opportunities are missed.  This patch fixes the issue by feeding the
>>> whole computation to the folder and only forcing to a gimple operand
>>> at the end.
>>>
>>> Bootstrapped and tested on x86_64 and AArch64.  Is it OK?
>>
>> Hm?  It uses gimple_build () which should do the same as fold_buildN in terms
>> of simplification.
>>
>> So where does that not work?  It is supposed to be the preferred way and no
>> new code should use force_gimple_operand (unless dealing with GENERIC
>> coming from other middle-end infrastructure like SCEV or niter analysis).
> Hmm, the current code calls force_gimple_operand several times, which
> causes the inefficiency.  The patch avoids that and does one call at
> the end.

But it forces to the same sequence that is used for extending the expression
so folding should work.  Where do you see that it does not?  Note the
code uses gimple_build (), not gimple_build_assign ().

Richard.
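
[Editorial note: the trade-off under discussion is that forcing every
intermediate result into an SSA temporary hides its structure from the
folder, whereas building the whole expression first lets simplifications
like (n + 1) - 1 -> n fire before anything is materialized.  The sketch
below is a toy model of that idea in Python; it is not GCC code, and the
names Temp, fold, and force are illustrative only.]

```python
class Temp:
    """An opaque temporary: once a value is forced into one, its
    internal structure is no longer visible to the folder."""
    def __init__(self, expr):
        self.expr = expr

def fold(expr):
    # Minimal folder: constant-folds, and simplifies the pattern
    # (e + c) - c -> e.  Anything it cannot see through (e.g. an
    # opaque Temp) is left alone.
    if isinstance(expr, tuple):
        op, a, b = expr
        a, b = fold(a), fold(b)
        if isinstance(a, int) and isinstance(b, int):
            return a + b if op == '+' else a - b
        if op == '-' and isinstance(a, tuple) and a[0] == '+' and a[2] == b:
            return a[1]                       # (e + c) - c  ->  e
        return (op, a, b)
    return expr

def force(expr):
    # Models forcing to a "gimple operand": the result becomes an
    # opaque temporary, hiding its structure from later folding.
    return Temp(fold(expr))

# Old scheme: force after every step -> the fold is blocked.
t1 = force(('+', 'n', 1))              # tmp1 = n + 1
old = fold(('-', t1, 1))               # tmp2 = tmp1 - 1 (residual subtraction)

# New scheme: build the whole expression, fold, force once at the end.
new = force(('-', ('+', 'n', 1), 1))   # folds to just n before forcing

print(isinstance(old, tuple))          # True  -> simplification was missed
print(new.expr)                        # 'n'   -> folded before forcing
```

In this toy model, the early-forcing pipeline keeps a needless subtraction
alive, while deferring the force to the end recovers the plain operand,
mirroring the inefficiency the patch is aimed at.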

> Thanks,
> bin
>>
>> Richard.
>>
>>>
>>> Thanks,
>>> bin
>>> 2017-06-12  Bin Cheng  <bin.ch...@arm.com>
>>>
>>>         * tree-ssa-loop-split.c (compute_new_first_bound): Feed bound
>>>         computation to the folder, rather than forcing to gimple
>>>         operands too early.
