On June 4, 2019 6:50:07 PM GMT+02:00, Andrew MacLeod <amacl...@redhat.com> 
wrote:
>On 6/4/19 11:37 AM, Richard Biener wrote:
>> On Tue, Jun 4, 2019 at 5:26 PM Richard Biener
>> <richard.guent...@gmail.com> wrote:
>>>
>>> But you still have a reference to the range in every BB dominated by
>>> the definition?
>> Btw, I was thinking of
>>
>> unsigned char foo(long);
>> void baz(unsigned char c)
>> {
>>    try{
>>        long i = c;
>>        i += foo(i);
>>        i += foo(i);
>>        i += foo(i);
>>        i += foo(i);
>> ... repeat N times ...
>>       alloca(i); // query range of i
>>    }
>>    catch (...)
>>      {
>>      }
>> }
>>
>>
>> where the single(!) query of the range of i on the alloca call
>> will populate N BBs caches with the Nth having ranges for
>> all SSA defs of i running into quadraticness in N.
>well, not really.  It's still linear, since the 'i' used in each foo() 
>will be dead, so the next block will contain no range for it:
>
>
>   try{
>       long i_2 = _1;
>       i_3 += foo(i_2);
>       i_4 += foo(i_3);
>       i_5 += foo(i_4);
>       i_6 += foo(i_5);
>... repeat N times ...
>      alloca(i_6); // query range of i
>
>once i_2 is 'dead' after the first call to foo(), it will no longer be 
>in the on-entry cache to any other block.
>The walk back never sees i_2 until the first call to foo(), so none of the

It's actually

   utemp_2 = foo(i_1);

bb:
   i_3 = i_1 + (long) utemp_2;
   utemp_4 = foo(i_3);

Eliding the separate stmt for the conversion.  From your description of the 
cache it sounded like getting incoming values is a take-all operation.  
Otherwise you'd need negative entries as well (I thought the cache might not 
contain varying ranges, for example). 

>blocks after that have a need for its range, so it will not be in their
>caches.
>
>The on-entry cache will only get a single range entry for each of those 
>blocks: the incoming value of i_x, and that's it.  The implicitly known 
>live-on-exit attribute is handy here.
>
>Now, that's not to say we can't construct cases where a lot of 
>ranges are being populated.  If we find the cache is really a problem, we 
>can start caching ranges, so that if all these i_x's were live somehow, 
>there would only be one range for each in this case, and the caches 
>would simply all point to the same range.

I suppose using i in the catch block would create something like this: a PHI 
with a large in-degree, and thus the switch case you already know about? 

>so far we haven't really run across anything that appears to trigger 
>significant concerns.

Sure.  It's still worth thinking about worst-case behavior and how you'd deal 
with it, given that once the ranger is actually used we will eventually run 
into these cases.  If a forward evrp walk were then faster than a single 
on-demand query, that would be concerning. 

Richard. 

>
>
>>
>> If you actually try, you will probably have to find a persisting
>> stmt that some user of the on-demand query looks for.
>> You converted some of the warning passes, so choose
>> a stmt that triggers it from there.
>>
>> Note we've run into "interesting" testcases mostly from
>> machine generated code which we still want to be able
>> to optimize at -O1 at least with reasonable use of
>> time and memory (and quadraticnesses are to be avoided
>> like a plague - not that we don't have some).
>>
>>>>   I think that's the
>>>> most common value, so that should reduce a large number of them.  I've
>>>> also considered caching ranges like we cache tree constants... but I
>>>> haven't delved into that.  I figured if memory turns out to be a
>>>> problem, then we'll look at it then.
>>>>
>>>> Andrew
>>>>
