How much can we do in the compiler, and how much can we do in the
interpreter? If we're going to have cached bytecode, it makes sense
to do as much optimization as we can in the compiler, since that
cost is paid once and amortized over every run. If not, we might as
well brute-force things at runtime to conserve compilation time. Or
should this be user-selectable with "use less"?
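
To make the tradeoff concrete, here's a minimal sketch of the
"pay once in the compiler" end of it: constant folding over a toy
expression tree. It's Python rather than whatever the implementation
language ends up being, and the Const/Var/Add/fold names are invented
for illustration, not a proposal:

    from dataclasses import dataclass

    @dataclass
    class Const:
        value: int

    @dataclass
    class Var:
        name: str

    @dataclass
    class Add:
        left: object
        right: object

    def fold(node):
        # Collapse constant subtrees once, at compile time, so the
        # interpreter never re-evaluates them on any later run.
        if isinstance(node, Add):
            left, right = fold(node.left), fold(node.right)
            if isinstance(left, Const) and isinstance(right, Const):
                return Const(left.value + right.value)
            return Add(left, right)
        return node

    print(fold(Add(Add(Const(2), Const(3)), Var("x"))))
    # -> Add(left=Const(value=5), right=Var(name='x'))

If the folded tree is cached along with the bytecode, the interpreter
sees one add instead of two, every time.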
I'm still thinking about things like type inference to reduce the
number of conversions that have to be done at runtime.
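
A rough sketch of how that could work, again over an invented toy AST
(the Lit/BinOp/infer names are mine, purely for illustration): a single
compile-time pass tags nodes with types, and a runtime conversion is
emitted only where the inferred types actually disagree:

    from dataclasses import dataclass

    @dataclass
    class Lit:
        value: object

    @dataclass
    class BinOp:
        op: str
        left: object
        right: object

    def infer(node):
        # Tag each node with a type at compile time; only where
        # the types disagree does the emitted code need a
        # runtime conversion.
        if isinstance(node, Lit):
            return type(node.value).__name__
        if isinstance(node, BinOp):
            lt, rt = infer(node.left), infer(node.right)
            if lt == rt:
                return lt              # no conversion opcode needed
            if {lt, rt} <= {"int", "float"}:
                return "float"         # one widening, decided at compile time
        return "unknown"               # fall back to a runtime check

    print(infer(BinOp("+", Lit(1), Lit(2))))    # int: bare integer add
    print(infer(BinOp("+", Lit(1), Lit(2.5))))  # float: one int->float widening

Wherever the pass comes back "unknown", the compiler just falls back
to the brute-force runtime conversion path, so the two approaches
compose.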
--
Morton's Law:
If rats are experimented upon, they will develop cancer.