On Tue, Mar 20, 2018 at 2:36 PM, Richard Biener
<richard.guent...@gmail.com> wrote:
> On Mon, Mar 19, 2018 at 6:08 PM, Aldy Hernandez <al...@redhat.com> wrote:
>> Hi Richard.
>>
>> As discussed in the PR, the problem here is that we have two different
>> iterations of an IV live outside of a loop.  This inhibits us from using
>> autoinc/dec addressing on ARM, and causes extra lea's on x86.
>>
>> An abbreviated example is this:
>>
>> loop:
>>   # p_9 = PHI <p_17(2), p_20(3)>
>>   p_20 = p_9 + 18446744073709551615;
>>   goto loop
>>
>>   <after the loop>
>>   p_24 = p_9 + 18446744073709551614;
>>   MEM[(char *)p_20 + -1B] = 45;
>>
>> Here we have both the previous IV (p_9) and the current IV (p_20) used
>> outside of the loop.  On ARM this keeps us from using auto-dec addressing,
>> because one use is at offset -2 and the other at -1.
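>>
>> A stand-alone example of roughly this shape (a hypothetical
>> reconstruction for illustration only, not the actual testcase from the
>> PR; the function and parameter names are made up) shows how two
>> iterations of the pointer IV stay live after the loop: the '-' store
>> needs the final, decremented pointer, while the call needs the value
>> from before the last decrement:
>>
>>   extern void george (char *);
>>
>>   char *
>>   emit_digits (char *p, unsigned val, int negative)
>>   {
>>     do
>>       {
>>         *--p = '0' + val % 10;   /* store via the current IV (p_20 above) */
>>         val /= 10;
>>       }
>>     while (val);
>>     if (negative)
>>       {
>>         george (p + 1);          /* wants the previous IV (p_9 above) */
>>         *--p = '-';              /* store at p_20 - 1 */
>>       }
>>     return p;
>>   }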
>>
>> With the attached patch we attempt to rewrite out-of-loop uses of the IV in
>> terms of the current/last IV (p_20 in the case above).  With it, we end up
>> with:
>>
>>   p_24 = p_20 + 18446744073709551615;
>>   *p_24 = 45;
>>
>> ...which helps both x86 and Arm.
>>
>> As you suggested in comment 38 on the PR, I specially handle out-of-loop
>> IV uses of the form IV+CST and propagate those accordingly (along with
>> the MEM_REF above).  Otherwise, in the less specific cases, we undo the
>> IV increment and use that value in all out-of-loop uses.  For instance,
>> in the attached testcase, we rewrite:
>>
>>   george (p_9);
>>
>> into
>>
>>   _26 = p_20 + 1;
>>   ...
>>   george (_26);
>>
>> The attached testcase tests the IV+CST specific case, as well as the more
>> generic case with george().
>>
>> Although the original PR was for ARM, this behavior can also be seen on
>> x86, so I tested on x86 with a full bootstrap + tests.  I also ran the
>> specific test on an x86-hosted ARM cross build and made sure we generate
>> two auto-dec addresses for it.  For the original test (slightly different
>> from the testcase in this patch), with this patch we are at 104 bytes
>> versus 116 without it.  There is still the issue of a division
>> optimization which would further reduce the code size; I will discuss
>> that separately as it is independent of this patch.
>>
>> Oh yeah, we could make this more generic, and maybe handle any multiple of
>> the constant, or perhaps *= and /=.  Perhaps something for next stage1...
>>
>> OK for trunk?
>
> Sorry for noticing this so late, but you use loop properties like ->latch
> and ->loop_father (via flow_bb_inside_loop_p) that may not be valid at
> this point, given that expand doesn't do loop_optimizer_init/finalize.  Generally
> you may face loops with multiple latches (->latch == NULL) here and
> loops may have multiple exits.  You probably are not hitting any
> of those problems because is_iv_plus_constant is quite restrictive
> (doesn't handle PLUS_EXPR or MINUS_EXPR).
>
> Also the latch doesn't need to be the block with the exit conditional.
>
> So first of all you'd need to wrap insert_backedge_copies () with
>
>  loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
> ...
>  loop_optimizer_finalize ();
>
> that is required to fix up a possibly broken state.  Then you
> should probably restrict handling to PHIs in BBs with
> bb->loop_father->header == bb (aka loop header PHIs),
> use get_loop_exit_edges when the IV increment is of OK form
> and then use dominator info to query in which exit block to
> insert compensation (or simply refuse to do anything for
> loops with multiple exits).
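>
> In sketch form (a rough illustration of the structure described above,
> not a finished patch; it assumes the existing body of
> insert_backedge_copies in tree-outof-ssa.c and leaves out the actual
> detection and rewriting):
>
>   basic_block bb;
>
>   loop_optimizer_init (AVOID_CFG_MODIFICATIONS);
>
>   FOR_EACH_BB_FN (bb, cfun)
>     {
>       /* Only look at loop header PHIs.  */
>       if (bb->loop_father->header != bb)
>         continue;
>
>       /* Refuse to do anything for loops with multiple exits.  */
>       vec<edge> exits = get_loop_exit_edges (bb->loop_father);
>       if (exits.length () == 1)
>         {
>           /* ... check that the IV increment feeding the backedge PHI
>              argument is of OK form and insert the compensation code
>              in exits[0]->dest ...  */
>         }
>       exits.release ();
>     }
>
>   loop_optimizer_finalize ();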
>
> So if you have more cycles to work on this that would be nice,
> otherwise I can certainly take over from here...

Ok, so I've finally taken a closer look.  Even unpatched, something goes
wrong with the current insert_backedge_copies code.  It does detect the
coalescing conflict, but instead of solving it in a way that lets the
backedge copy be coalesced, it inserts a copy on that edge (actually
before the exit test).  It could have done better by inserting a copy
before the IV increment and adjusting the out-of-loop uses to use that.
There is also another IV the existing code triggers on, unhelpfully
inserting a copy between two uses (the exit test and the producer of the
backedge value) instead of sinking the producer after the last use
(splitting the latch edge); sinking wouldn't help on x86 anyway, where
we combine the div and mod later, recreating the conflict.
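
In GIMPLE terms (my reading of the two copy placements, re-using the SSA
names from the example above; p_25 is a made-up name for the inserted
copy), the existing code effectively does

  loop:
    # p_9 = PHI <p_17(2), p_25(3)>
    p_20 = p_9 + 18446744073709551615;
    ...
    p_25 = p_20;      <-- copy before the exit test
    if (...) goto loop;

whereas inserting the copy before the IV increment and rewriting the
out-of-loop uses of p_9 to use p_25 would leave p_9 and p_20 free to
coalesce:

  loop:
    # p_9 = PHI <p_17(2), p_20(3)>
    p_25 = p_9;       <-- copy before the IV increment
    p_20 = p_9 + 18446744073709551615;
    ...
    if (...) goto loop;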

If there were no similar increments to adjust outside of the loop with
our approach, then inserting the copy inside the loop would have been
the better choice, I guess.  So on x86, with the in-loop copy, I see

.L2:
        movl    %ecx, %eax
        xorl    %edx, %edx
        movq    %rbx, %rbp   <--- pre-inc copy
        decq    %rbx
        divl    %esi
        addl    $48, %edx
        movb    %dl, (%rbx)
        cmpl    $9, %ecx
        jbe     .L7
        movl    %eax, %ecx
        jmp     .L2
.L7:
        testl   %edi, %edi
        jns     .L1
        movq    %rbp, %rdi
        call    george
        movb    $45, -1(%rbx)
        leaq    -2(%rbp), %rbx
.L1:
        movq    %rbx, %rax

vs. something like the original fix

.L2:
        movl    %ecx, %eax
        xorl    %edx, %edx
        decq    %rbx
        divl    %esi
        addl    $48, %edx
        movb    %dl, (%rbx)
        cmpl    $9, %ecx
        jbe     .L7
        movl    %eax, %ecx
        jmp     .L2
.L7:
        testl   %edi, %edi
        jns     .L1
        leaq    1(%rbx), %rdi
        decq    %rbx
        call    george
        movb    $45, (%rbx)
.L1:
        movq    %rbx, %rax

where the copy inside the loop could have been smaller
than N adjustments outside of it.

Just to say that the existing code needs improvement here as well.
Maybe Micha remembers what exactly the idea was behind inserting
the copy (kind of) on the backedge.

So far I've simplified the patch further (also implementing the
copy-inside-loop variant; I'm just not sure when we'd want to use
that).

Richard.

> Thanks,
> Richard.
>
>> Aldy
