# -----Original Message-----
# From: Bryan C. Warnock [mailto:[EMAIL PROTECTED]]
# Sent: Saturday, September 01, 2001 12:29 PM
# To: [EMAIL PROTECTED]
# Subject: Deoptimizations
#
#
# Random musings from a discussion I had yesterday.  (And check me on my
# assumptions, please.)
#
# One of the more common lamentations is that a dynamic language (like Perl)
# doesn't mix well with optimizations, because the majority of optimizations
# are done at compile time, and the state at compile time isn't always the
# state at runtime.  A common declaration is, "We'd like to optimize that,
# but we can't, because foo may change at runtime."
#
# Perl 5 optimizations replace (or, more accurately, null out) a more complex
# opcode stream [1] with a simpler one.  Constant folding is one such example.
#
# 5 -> add -> 10 -> add -> 15 -> store
#
# becomes
#
# 30 -> store
#
# plus a bunch of null ops.  (The null ops are faster than attempting to
# splice the new opcode stream [1] in place of the old one, but I don't know
# by how much.)
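
(As an aside, it's easy to watch that folding happen with the B backends
that ship with perl; a tiny example, just for illustration:

    # Perl 5 folds these constants at compile time, so the op tree that
    # actually runs holds a single constant instead of two add ops.
    my $x = 5 + 10 + 15;    # compiled as if it were: my $x = 30;
    print "$x\n";           # prints 30

    # One way to see the folded op tree is to deparse it:
    #     perl -MO=Deparse -e 'my $x = 5 + 10 + 15'
    # which prints back "my $x = 30;".
)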
#
# Consider the following:
#
# Create an optimization op, which is more or less a very fancy branch
# operator.  Whenever you elect to do an aggressive optimization, place the
# opt op as a branch between the newly created... [1] and the old, full one.
#
# The op could decide (from a switch or variable - turn optimizations on and
# off while running; or from state introspection, perhaps, since you probably
# have a good idea of what changes would invalidate it) whether to exercise
# the optimized code, or revert to the original, unoptimized version.  I
# suppose, if you were to implement an advanced JIT, you could replace an
# invalidated optimization with its newly optimized variant.
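
(Sketched at the Perl level, just to make the shape concrete - this only
models what the opt op would do, not how it would actually be spliced into
the opcode stream, and the names are made up:

    # The "opt op" as a plain branch: take the optimized path only while a
    # cheap guard says the optimization is still valid, otherwise fall back
    # to the original, unoptimized code.
    my $original    = sub { my $sum = 0; $sum += $_ for 5, 10, 15; $sum };
    my $optimized   = sub { 30 };    # pre-folded result
    my $still_valid = 1;             # flipped off if "foo" changes at runtime

    sub maybe_folded {
        return $still_valid ? $optimized->() : $original->();
    }
)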
#
# That would also work with a couple of tie-ins with the language.  First, of
# course, the ubiquitous pragma, which could affect which optimizations
# (assuming we categorized them) we should run, and which we shouldn't, based
# on the suggestions from the programmer.  And perhaps some hook into the
# internals for the same reason.
#
# sub foo {
#     no optimizations;
#     ...
# }
#
# or
#
# {
#     local $opt = (ref $obj eq "SomeNewObject");
#     # If the $obj has changed, don't run any optimizations
# }
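
(A very rough sketch of what the pragma end of that could look like,
assuming it just flips a flag the opt ops consult - the package name and
flag are invented here, and real lexical scoping would need more machinery:

    package optimizations;

    our $enabled = 1;

    # "use optimizations;" turns the flag on, "no optimizations;" turns it
    # off; the opt ops would check $optimizations::enabled before taking
    # the optimized branch.
    sub import   { $enabled = 1 }
    sub unimport { $enabled = 0 }

    1;
)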
#
# Is this possible? Feasible? Way out there?
I think it's a good idea! ++[bwarnock]!
Of course, the hard part is detecting when the optimization is invalid.
While there are simple situations:

    sub FOO {"foo"}
    print FOO;

evaluating to:

                          /-no------"foo"-----\
    opt: FOO redefined? -<                     >---print
                          \-yes-----call FOO--/

there could also be more complicated situations, where the conditions that
make the optimization invalid are harder to define.
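
At the Perl level, that simple case might be modeled roughly like this (just
a sketch - the real guard would be an op, and the snapshot variables are
invented for illustration):

    sub FOO { "foo" }

    my $original_foo = \&FOO;    # snapshot taken when the call was "optimized"
    my $folded       = FOO();    # the pre-computed result

    # Guard: if FOO still points at the same code, use the folded value;
    # if someone has redefined FOO, fall back and actually call it.
    if (\&FOO == $original_foo) {
        print $folded;
    }
    else {
        print FOO();
    }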
I'd also suggest a different pragma:
use less 'optimization';
--Brent Dax
[EMAIL PROTECTED]
"...and if the answers are inadequate, the pumpqueen will be overthrown
in a bloody coup by programmers flinging dead Java programs over the
walls with a trebuchet."