> On April 12, 2016 2:12:14 PM GMT+02:00, Jan Hubicka <hubi...@ucw.cz> wrote:
> >> Hello.
> >> 
> >> As release managers agreed on IRC, following patch reverts r234572
> >> which introduced new PR testsuite/70577.
> >> 
> >> I've been running bootstrap & regression tests on x86_64-linux-gnu.
> >> Ready to be installed after it finishes?
> >
> >OK, thanks for the revert. 
> >I looked into the prefetch testcase and I think it only needs updating
> >because
> >all arrays are trailing. I will try to debug the galgel issue.
> 
> I think we do not want prefetching for the trailing array of size 5.  This is 
> what the testcase tests.  I think we do want prefetching for a trailing array 
> of size 1000.  Thus I think the previous behavior of estimated niter was 
> better.

Just because the array has size 1000000 we can't assume that the loop will most
likely iterate 1000000 times.  This is what happens in the go solver that motivated
this patch: there are 10 nested loops where each iterates at most 10 times,
but most of the time they iterate once or twice (10^10 is simply too much).
Setting the estimated niter to 10 makes us trade setup cost for iteration cost
way too eagerly.
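
To make the shape concrete, here is a rough sketch (made-up code, not the
actual go solver) of the kind of loop nest I mean:

/* Made-up sketch: ten nesting levels, each loop bounded by 10
   candidates, but pruning usually leaves only one or two, so the
   real trip counts are tiny.  */
#define LEVELS 10
#define WIDTH  10

static int nviable[LEVELS];   /* candidates surviving pruning; usually 1 or 2 */

static int
search (int level)
{
  if (level == LEVELS)
    return 1;

  int count = 0;
  /* The static bound is WIDTH, so assuming WIDTH iterations at every
     level means 10^10 in total, while the typical count is once or
     twice.  An estimate of 10 per loop makes setup-heavy
     optimizations look much cheaper per iteration than they are.  */
  for (int i = 0; i < nviable[level] && i < WIDTH; i++)
    count += search (level + 1);

  return count;
}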

I see that the heuristic worked here a bit by accident.  We can assume
trailing arrays to have size 0 or 1 if they really are such, and we could
work out how the slot is allocated and rely on the object size, which we
don't do at the moment (I think).
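
For example (made-up code, just to show what I mean by a trailing array and
by relying on the object size):

#include <stdlib.h>

/* Made-up example of a trailing array: the declared bound is 1, the
   real length is only known from the allocation, i.e. from the object
   size, which we currently do not consult.  */
struct row
{
  int len;
  int data[1];                 /* trailing array; real length is len */
};

static struct row *
make_row (int n)
{
  struct row *r = malloc (sizeof (struct row) + (n - 1) * sizeof (int));
  if (r)
    r->len = n;
  return r;
}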

In order to enable loop peeling by default next stage1, I extended the niter
analysis to also collect a likely upper bound.  I use it in this patch and also
in some other cases.  For example I predict

int a[3];
for (i = 0; cond; i++)
  if (cond2)
    a[i] = 1;

to likely iterate at most 3 times, which makes us peel the loop 3 times but
keep the rolled loop as a fallback.  That patch also sets upper bounds for those
trailing arrays of odd sizes.
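
For illustration, a hand-written sketch of the peeled form I am after (not
what the patch literally emits):

/* Hand-written sketch of peeling the loop above 3 times while keeping
   the rolled copy as a fallback.  n and flags[] stand in for cond and
   cond2; for the code to stay in bounds flags[i] must be zero for
   i >= 3, which is exactly the "likely" property the derived bound
   relies on.  */
int a[3];

void
peeled (int n, const int *flags)
{
  int i = 0;

  if (i >= n) return;          /* peeled iteration 1 */
  if (flags[i]) a[i] = 1;
  i++;

  if (i >= n) return;          /* peeled iteration 2 */
  if (flags[i]) a[i] = 1;
  i++;

  if (i >= n) return;          /* peeled iteration 3 */
  if (flags[i]) a[i] = 1;
  i++;

  for (; i < n; i++)           /* rolled fallback, rarely entered */
    if (flags[i])
      a[i] = 1;
}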

The likely upper bounds can then often be used in places where we use estimates
and upper bounds (such as to throttle down unrolling/vectorizing/ivopts) and to
drive peeling.

Honza
> 
> Richard.
> 
> >Honza
> 
