On Saturday 01 September 2001 05:07 pm, Brent Dax wrote:
> Of course, the hard part is detecting when the optimization is invalid.
> While there are simple situations:
>
>       sub FOO {"foo"}
>
>       print FOO;
>
> evaluating to:
>
>                             /-no------"foo"-----\
>       opt: FOO redefined? -<                     >---print
>                             \-yes-----call FOO--/
>
> there could also be more complicated situations, in which the
> conditions that make the optimization invalid are harder to define.

Granted, code can always be written more complexly than our ability to 
analyze it, in which case we may very well just issue the warning that 
Thou Shalt Not (if thou wantest consistent, accurate results).

But for many optimizations (perhaps more so with peephole optimizations 
than with full code analysis), simply identifying a possible optimization 
usually identifies its potential undoing as well.
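As a toy illustration (the op representation here is invented purely for 
illustration), consider a constant-folding peephole pass.  The pattern 
match that finds the fold is exactly what names its undoing: the rewrite 
holds only so long as 'add' still means plain addition on those operands.

    # Fold  const a, const b, add  into  const a+b.
    my @ops = (['const', 2], ['const', 3], ['add'], ['print']);

    for (my $i = 0; $i + 2 <= $#ops; $i++) {
        if (   $ops[$i][0]     eq 'const'
            && $ops[$i + 1][0] eq 'const'
            && $ops[$i + 2][0] eq 'add') {
            # Recognizing the pattern also names the guard: redefine
            # what 'add' means, and this rewrite becomes wrong.
            splice @ops, $i, 3, ['const', $ops[$i][1] + $ops[$i + 1][1]];
            $i--;   # re-examine, in case the fold enables another fold
        }
    }
    # @ops is now (['const', 5], ['print'])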

After all, optimizations don't just happen.  They are, more or less, a set 
of known patterns that you look for.  For a dynamic language, the original 
identification of those patterns may very well include identifying which 
pieces of runtime state are critical to the optimization remaining valid.

Of course, with as *highly* dynamic a language as Perl, there may be several 
hundred things that could invalidate a given optimization - it would be less 
optimal to detect all those things than to simply run the unoptimized code!

But in many cases, it may only be one or two.  

For instance, optimization within (or of) an object's methods could very 
well be dependent solely on whether that object was ever redefined.  You 
could then implement something akin to a dirty flag simply to check whether 
it had been updated, and if so, deoptimize.
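
A minimal sketch of that dirty flag, with all names hypothetical (this is 
illustration, not a proposed internals API): a per-class generation counter 
that redefinition bumps, and a call site that caches its resolved method 
until the counter moves.

    package Counter;

    our $generation = 0;        # bumped on any redefinition: the dirty flag

    sub new  { bless { n => 0 }, shift }
    sub incr { $_[0]{n}++ }

    sub redefine_incr {
        my ($code) = @_;
        no warnings 'redefine';
        *Counter::incr = $code;
        $generation++;          # invalidate anything cached against us
    }

    package main;

    my ($cached_code, $cached_gen);

    sub call_incr {             # the "optimized" call site
        my ($obj) = @_;
        if (!defined $cached_gen or $cached_gen != $Counter::generation) {
            $cached_code = $obj->can('incr');   # deoptimize: re-resolve
            $cached_gen  = $Counter::generation;
        }
        $cached_code->($obj);   # fast path: no method lookup
    }

After something like Counter::redefine_incr(sub { $_[0]{n} += 2 }), the 
next call_incr() sees the bumped counter and re-resolves before calling.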

Of course, whatever changes had been made wouldn't necessarily negate the 
original optimization - but in our case, we can afford to be conservative, 
because in most cases the object won't be updated at all.  80/10 vs 90/50 
[1].

Another example that comes up often with multiple fetches and stores on 
variables is tying and overloading.  If a variable is non-magical, 
multiple retrievals and stores could potentially be reordered and combined. 
However, if it is tied, the explicit operations as coded could be critical.  
(Then again, maybe not.  But we have to assume that they are.)

Our logic for this particular form of optimization can check whether we 
know the variable is tied.  If it is, we may as well move on (even if, at 
runtime, the variable in question is *never* tied by the time execution 
reaches this area of the code).  But if we determine that it isn't, we know 
that someone could still play symbol-table tricks on us and replace it with 
one that is.  So we could check whether the symbol table had changed, 
whether the symbol table for that package had changed (or pseudo-package, 
if the variable were lexically scoped), or whether the symtable entry for 
the variable itself had changed - whichever has the best long-term results 
(in terms of the speed trade-off between doing the optimization and taking 
a false hit to deoptimize).  Or we could just say we won't optimize it, or 
we can say that we will - caveat scriptor!
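
To make the hazard concrete, here is a small self-contained demonstration 
(the class is invented for illustration) of a tie whose FETCH has a side 
effect, so that combining the two reads into one would silently change the 
result:

    package NoisyScalar;

    sub TIESCALAR { my ($class) = @_; my $reads = 0; bless \$reads, $class }
    sub FETCH     { my ($self)  = @_; ++$$self }  # every read is a new value
    sub STORE     { }                             # ignore stores, for brevity

    package main;

    tie my $x, 'NoisyScalar';
    print $x + $x, "\n";    # prints 3 (1 + 2): two distinct FETCHes;
                            # a single "combined" fetch would print 2
    print defined tied($x) ? "magical\n" : "plain\n";   # the tie test above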

Now, it could very well be that very few aggressive optimizations boil down 
to such a small set of invalidating conditions, in which case it's not 
worth it.  This is Perl, after all, and TMTOWTDI.  You could write the Perl 
code itself to differentiate between the cases, and force non-optimal code 
when it wasn't safe.  This is precisely what memoization does: you could 
optimize and cache at the opcode level, but there's so much that could 
change, so you implement it yourself in Perl space.

{
    # Hypothetical per-scope optimizer switch: optimize only while
    # $thingy is known not to be tied.
    local $OPTIMIZATIONS = not defined tied $thingy;
    # A bunch of code that could be optimized.
    # If $thingy is tied, non-optimal code is forced for this scope.
}
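
For comparison, the stock Memoize module does this sort of thing entirely 
in user space: the programmer, not the compiler, asserts the invariant 
(here, that the function is pure and worth caching) - caveat scriptor if 
the assertion is wrong.

    use Memoize;

    sub fib {
        my $n = shift;
        return $n if $n < 2;
        return fib($n - 1) + fib($n - 2);
    }

    memoize('fib');         # wrap fib() in a user-space result cache
    print fib(40), "\n";    # fast; no optimizer cooperation required

The cache is valid only because we promised that fib() has no side effects 
and always returns the same value for the same argument.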

-- 
Bryan C. Warnock
[EMAIL PROTECTED]
