"Bryan C. Warnock" wrote:
> No.  We're optimizing on what we know at the time.

Um, it looked like you were generating a graph with all possible
optimizations inserted. At what point do you commit to an optimization?
If you only commit at run time, you need to do a *lot* of state
management, and it seems to me that the overhead will be too high.

> I think the only way you're going to be able to detect dynamic redefinitions
> is dynamically.  :-)

Not really. Does the program include string eval? That alone throws out
optimizations like folding and dead code elimination for *any* global,
because an eval'd string can redefine anything at run time. If the
program redefines a single sub by assigning to a symbol table slot, then
only that sub limits optimization. The optimizer starts with a set of
assumptions and revises them as it sees more of the program.
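
To make that concrete, here's a quick Perl 5 sketch (the sub name is
made up) of the two kinds of redefinition:

    sub answer { 42 }         # looks like a folding candidate
    print answer(), "\n";     # could be folded to a constant 42

    # Symbol table assignment: only answer() loses its optimizations.
    *answer = sub { 43 };
    print answer(), "\n";     # a folded 42 here would now be wrong

    # String eval: this can redefine *anything*, so the optimizer
    # has to drop its assumptions about every global.
    eval 'sub answer { 44 }';

The first form is visible to a static scan of the program; the second
is why string eval is the killer.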

> I don't agree with eliminating re-definitions of compiled code (presumably
> "pre-compiled modules", since it's all compiled) - it's a distribution
> nightmare

It has nothing to do with distribution. It has to do with programming
style. Certain programming styles are not compatible with optimization.
All I'm saying is that the optimizer might prevent you from doing
certain things. If you want to do those things, then don't use the
optimizer.

BTW, some techniques, like memoizing, are always possible. Since the
compiler can rebuild the cache when it redefines a memoized function,
the state management cost is paid only at compile time. Other things,
like dead-code elimination, are not, because you must commit to the
optimization. How do you de-optimize eliminated code?
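
As a rough sketch (the function and cache names are hypothetical), the
entire de-optimization step for a memoized sub is a hash clear:

    my %cache;
    sub fib {
        my $n = shift;
        $cache{$n} = $n < 2 ? $n : fib($n - 1) + fib($n - 2)
            unless exists $cache{$n};
        return $cache{$n};
    }

    # If fib() is redefined, the compiler just flushes stale results:
    %cache = ();    # cheap, and no run-time bookkeeping required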

Sometimes it pays to be lazy. The trick is knowing when.

- Ken
