Sara Golemon wrote:
> For reasons best left on IRC, it looks like I'll be working on runtime
> JIT.  To that end, I've come up with a few proposals of varying
> complexity and feature-set completeness:
> 
> Option 1:
> Dump support for compile-time JIT and replace it with a call at runtime
> using the same semantics.
> 
> Advantages: No change in the API (well, no further change anyway;
> Unicode support pretty much guarantees that things will change regardless).
> 
> Disadvantages: Could someone be relying on compile-time JIT for
> something already?  Maybe activation triggers an action which has to
> take place prior to script execution?  From what I've seen, JIT isn't in
> heavy use, but my perceptions on the topic aren't definitive.

I have a feeling this won't break much, if anything, but I am not sure
this is the best approach for Unicode encoding (see my response to
Option 4).
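
For the sake of discussion, here is a rough standalone model of how I
read Option 1 (the names and types are mine, not the actual engine
API): the first runtime access to an autoglobal converts the whole
array with the same semantics compile-time JIT has today, just
deferred.

/* Rough model of Option 1: whole-array JIT on first runtime access.
 * Invented types/names for illustration only -- not the Zend API. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    const char *raw;     /* binary value from the request */
    char decoded[64];    /* filled in by the runtime JIT pass */
} gpc_entry;

typedef struct {
    gpc_entry *vars;
    int count;
    int jit_done;        /* has the runtime JIT already run? */
} gpc_array;

/* Stand-in for the binary -> Unicode conversion. */
static void convert(const char *in, char *out) { strcpy(out, in); }

/* First access to ANY element converts the whole array -- the same
 * semantics compile-time JIT has now, just deferred to runtime. */
static const char *gpc_fetch(gpc_array *a, const char *name)
{
    int i;
    if (!a->jit_done) {
        for (i = 0; i < a->count; i++)
            convert(a->vars[i].raw, a->vars[i].decoded);
        a->jit_done = 1;
    }
    for (i = 0; i < a->count; i++)
        if (!strcmp(a->vars[i].name, name))
            return a->vars[i].decoded;
    return NULL;
}

int main(void)
{
    gpc_entry vars[] = { { "q", "hello", "" }, { "junk", "xyzzy", "" } };
    gpc_array get = { vars, 2, 0 };
    printf("%s\n", gpc_fetch(&get, "q"));   /* converts both entries */
    return 0;
}

Note that the single read of "q" still pays for converting "junk" too,
which is part of why I'd rather see Option 4.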

> Option 2:
> Leave compile-time JIT alone, and add a second callback for runtime JIT.
> 
> Advantages: Doesn't break BC, and offers extensions the chance to know
> that the code contains autoglobal references without actually having to
> act on them unless they're needed.
> 
> Disadvantages: Adds to complexity/confusion by having two separate
> callbacks for essentially the same action.

What would compile-time JIT do here?  Just create a bunch of binary
elements that are then overwritten at runtime with the encoded elements
on access?  This doesn't seem like a good idea either as the
compile-time version would almost always be completely redundant,
wouldn't it?

> Option 3:
> Extend JIT signature with a "stage" parameter to indicate if the JIT
> trigger is occurring during compile-time or run-time.  The callback can
> decide when/if it performs processing using current return value
> disarming semantics.

I think we'd confuse people with that.  We should pick one and stick
with it.
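
To spell out why: Option 3 means a callback shaped roughly like this
(again, invented names and types, not the real signature), and every
implementer has to remember to check the stage even though most will
only ever care about one of them.

/* Rough model of Option 3: one callback, invoked at both stages.
 * Invented names for illustration only -- not the Zend API. */
#include <stdio.h>

typedef enum { JIT_STAGE_COMPILE, JIT_STAGE_RUNTIME } jit_stage;

/* A nonzero return "disarms" further JIT calls for this autoglobal,
 * mirroring the current return-value semantics Sara mentioned. */
typedef int (*autoglobal_jit_cb)(const char *name, jit_stage stage);

static int gpc_jit(const char *name, jit_stage stage)
{
    if (stage == JIT_STAGE_COMPILE) {
        return 0;   /* nothing useful to do yet; wait for runtime */
    }
    printf("converting %s at runtime\n", name);
    return 1;
}

int main(void)
{
    autoglobal_jit_cb cb = gpc_jit;
    cb("_GET", JIT_STAGE_COMPILE);   /* a no-op for most callbacks */
    cb("_GET", JIT_STAGE_RUNTIME);
    return 0;
}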

> Option 4:
> Include fetchtype and subelement in the runtime JIT callback, allowing
> the callback to do only the work necessary to prepare for the read/write
> call being performed.

I like this approach.  Getting right down to the individual GPC entries
avoids the potentially crippling overhead of iterating through a lot of
fields that may never be used.  It also solves the question of what to
do about conversion errors.  When you convert an entire array at once,
as the current compile-time JIT does, what happens when a single entry
has a conversion error?  How do you propagate the error to the user?
And what if the error is on an element the user doesn't care about?  In
fact, a bad guy could simply add random elements full of bogus data just
to trigger these errors.  With this approach we skip those poisonous
entries entirely, and any encoding error can be reported back to the
user right when it happens.  Once you toss error handling into the mix,
I don't think this is the most complex solution, as you indicated it
would be; I think it actually simplifies things a lot.
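
For the record, here is a rough standalone sketch of what I have in
mind (invented names and types again, not a real engine API): the
callback only ever sees the one element being fetched, so a conversion
failure on some bogus entry nobody reads never surfaces, and a failure
on an element the script does read gets reported at exactly the point
of access.

/* Rough model of Option 4: per-element runtime JIT, keyed on fetch type
 * and subelement.  Invented names for illustration -- not the Zend API. */
#include <stdio.h>
#include <string.h>

typedef enum { FETCH_READ, FETCH_WRITE } fetch_type;

typedef struct {
    const char *name;
    const char *raw;     /* binary value from the request */
    char decoded[64];
    int converted;
} gpc_entry;

/* Stand-in conversion: fails on any 0xff byte, the way a real binary ->
 * Unicode conversion would fail on invalid input. */
static int convert(const char *in, char *out)
{
    if (strchr(in, '\xff'))
        return -1;
    strcpy(out, in);
    return 0;
}

/* Only the requested subelement is converted; an error is reported to
 * the caller right here, at the point of access. */
static const char *gpc_fetch(gpc_entry *vars, int count,
                             fetch_type type, const char *subelement)
{
    int i;
    (void)type;   /* a write fetch might skip conversion entirely */
    for (i = 0; i < count; i++) {
        if (strcmp(vars[i].name, subelement))
            continue;
        if (!vars[i].converted) {
            if (convert(vars[i].raw, vars[i].decoded) != 0) {
                fprintf(stderr, "encoding error in $_GET['%s']\n",
                        subelement);
                return NULL;
            }
            vars[i].converted = 1;
        }
        return vars[i].decoded;
    }
    return NULL;
}

int main(void)
{
    gpc_entry vars[] = {
        { "q",    "hello",     "", 0 },
        { "junk", "bogus\xff", "", 0 },   /* never read, never converted */
    };
    printf("%s\n", gpc_fetch(vars, 2, FETCH_READ, "q"));
    return 0;
}

The "junk" entry never gets converted at all, so its bogus data can't
poison the rest of the request.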

-Rasmus
