On Sunday 02 September 2001 12:21 pm, Ken Fox wrote:
> The idea of inserting "optimization ops" seems silly too. It would
> be faster to just replace the definition of the sub whenever the
> sub changes. Perl 6 subs are just as fast as ops.
Except not every optimization is sub-oriented. Optimizations within a sub
that depend on the sub itself not being redefined need no such magic,
because the very act of redefinition blows away the optimizations.
Optimization within a sub is also uninteresting - either a new sub will be
compiled to replace it (in which case it may have its own optimizations
built in, and what do I on the outside care about that?), or there will be
some symbol table magic which effectively does the same thing.
>
> The optimizer can't optimize what it doesn't know. You're
> asking for some sort of "Quantum::SuperPosition::Optimizer" that
> stores all optimization states at once.
No. We're optimizing on what we know at the time. If what we know changes
(or, more accurately, if we can determine that what we knew has changed),
then we return to our original state (and, presumably, see if the new
information leads to a new answer).
The premise is that you only have two states - the original, unoptimized
code, and the code that is optimized for the current state. If the state
changes, then you replace the previously optimized code with newly optimized
code (if you wish), or simply revert to the original.
In the case where there are no dependencies - where it's just a simple
optimization - you just do it.
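To make that concrete, here's a toy Perl 5 sketch of the two-state scheme.
(The names - install_optimized, invalidate - are made up for illustration;
nothing here is a real Parrot API.)

    use strict;
    use warnings;

    my (%original, %optimized);

    # Swap in an optimized version, stashing the unoptimized
    # code ref so we can always fall back to it.
    sub install_optimized {
        my ($name, $opt) = @_;
        no strict 'refs';
        no warnings 'redefine';
        $original{$name} ||= \&{$name};
        $optimized{$name} = $opt;
        *{$name} = $opt;
    }

    # A dependency changed: throw away the optimized version
    # and revert to the original.
    sub invalidate {
        my ($name) = @_;
        return unless delete $optimized{$name};
        no strict 'refs';
        no warnings 'redefine';
        *{$name} = $original{$name};
    }

    sub total { my $t = 0; $t += $_ for 1 .. $_[0]; $t }

    install_optimized('main::total', sub { $_[0] * ($_[0] + 1) / 2 });
    print total(100), "\n";      # optimized: 5050
    invalidate('main::total');   # assumption broke; back to the original
    print total(100), "\n";      # unoptimized, same answer: 5050
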
>
> The only way we're going to optimize Perl is by detecting dynamic
> re-definitions. We also need to eliminate re-definitions of
> compiled code. If all modules are loaded as source (not bytecode or
> native code), then re-definition is fine. IMHO it is too difficult
> to build a compiler that records its optimization assumptions and
> attempts to verify that the semantics of a re-definition are
> consistent with the previous assumptions. (Hmm, maybe the optimizer
> could record unoptimized definitions? Those would be safe to
> replace. Dan?)
I think the only way you're going to be able to detect dynamic redefinitions
is dynamically. :-)
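(In Perl 5 terms, the cheapest dynamic check I can think of is just noticing
that the code ref behind the name has changed. Toy code again, made-up
names:)

    use strict;
    use warnings;

    my %watched;

    # Remember the code ref currently behind a sub name.
    sub watch_sub {
        my ($name) = @_;
        no strict 'refs';
        $watched{$name} = \&{$name};
    }

    # True if the name now points at different code.
    sub redefined {
        my ($name) = @_;
        no strict 'refs';
        return \&{$name} != $watched{$name};
    }

    sub foo { 42 }
    watch_sub('main::foo');

    {
        no warnings 'redefine';
        *foo = sub { 43 };        # dynamic redefinition at runtime
    }

    print redefined('main::foo') ? "redefined\n" : "unchanged\n";
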
I don't agree with eliminating re-definitions of compiled code (presumably
"pre-compiled modules", since it's all compiled) - it's a distribution
nightmare (can I precompile this, or do I need to deploy just the source?),
and I certainly don't like the language changing solely due to the source
format.
It may very well be too difficult - I don't know. As I stated before, when
faced with an optimization that could be invalidated, with no easy way of
checking it, you have to decide whether to do it or not. The current
(and probably most correct) decision is not to do it.
However, off-the-cuff discussions about particular optimizations invariably
end with, "But we can't do that, because {something} can happen at runtime."
I'm simply suggesting that if we can easily detect that *something* did
happen, there might be a way to avoid rejecting the optimization from the
get-go.
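Even a global generation counter would do - one increment on any
redefinition, one cheap comparison before taking the optimized path.
(Again, just an illustration, not a proposal for the actual mechanism:)

    my $generation = 0;
    sub note_redefinition { $generation++ }  # hook this into redefinition

    # Recorded when the optimized code was produced ...
    my $compiled_at = $generation;
    # ... and checked, cheaply, before trusting it.
    sub still_valid { $generation == $compiled_at }
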
--
Bryan C. Warnock
[EMAIL PROTECTED]