At 09:12 PM 9/3/2001 -0700, Brent Dax wrote:
>From: Dan Sugalski [mailto:[EMAIL PROTECTED]]
># At 07:05 PM 9/3/2001 -0700, Brent Dax wrote:
># ># From: Dan Sugalski [mailto:[EMAIL PROTECTED]]
># ># At 05:30 PM 9/3/2001 -0700, Brent Dax wrote:
># ># >As far as expensiveness, I think this can be just as fast as our
># ># >current offset-into-the-pad method.
># >#
># ># I was speaking in both speed and memory use when I was talking about
># ># expense. We'd need to maintain a hash structure for each pad, plus
># ># we'd need to either design the hash structure such that it didn't
># ># need absolute addresses (so we could build it at compile time, which
># ># could be a long time before runtime with a disk freeze or two and an
># ># FTP in the interim), or we'd need to patch the addresses up at
># ># runtime when we allocated a new pad.
># >
># >I assume we're willing to have more fixup time for runtime
># >performance, correct?
>#
># Yes. But fixup is a runtime cost, so we need to weigh what the fixup
># costs versus the return we get from it.
>
>But it's a one-time runtime cost, unlike, say, a string eval in a loop.

People who do string eval in a loop deserve what they get. Probably even 
more than that. (If you invoke the compiler that often at runtime, well, 
tough--your performance is probably going to suck... :)
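The tradeoff under discussion can be sketched roughly as follows. (Python is used purely for illustration; the real pads would be C structures inside the interpreter, and all class and method names below are invented for this sketch.)

```python
# Offset-into-the-pad: the compiler resolves each lexical to a fixed
# slot number, so a runtime access is a plain array index. The
# name -> offset map is only needed at compile time.
class OffsetPad:
    def __init__(self, names):
        self.offsets = {name: i for i, name in enumerate(names)}
        self.slots = [None] * len(names)

    def fetch(self, offset):           # runtime: O(1) index
        return self.slots[offset]

    def store(self, offset, value):
        self.slots[offset] = value

# Hash-based pad: every access hashes the variable name, and the
# structure must be carried (and cloned) at runtime -- but by-name
# lookup comes for free.
class HashPad:
    def __init__(self, names):
        self.slots = {name: None for name in names}

    def fetch(self, name):             # runtime: hash lookup
        return self.slots[name]

    def store(self, name, value):
        self.slots[name] = value
```

The question in the thread is whether the hash pad's extra memory and clone/setup time is worth paying for the relatively rare by-name access.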

># ># I'm not convinced the memory usage, and corresponding time to clone
># ># and/or set up the hash-based pad, is worth the relatively infrequent
># ># by-name access to variables in the pad. I could be wrong, though.
># ># We'll have to try it and see. (Shouldn't affect the bytecode,
># ># however, so we can try different methods and benchmark them as need
># ># be)
># >
># >By using something similar to temp() (where the SV* is temporarily
># >replaced), cloning should only be necessary for situations in which
># >two threads are running the same function at the same time.
>#
># Nope, I'm talking about recursion. When you do:
>#
>#    sub foo {
>#          foo();
>#    }
>#
># we need to clone foo's pad from the template, because we need a new
># one. Otherwise that whole lexical variable/recursion thing doesn't
># work, which is A Bad Thing. :)
>
>Now is where the temp() stuff I was talking about earlier comes in.

No. Doesn't work. Closures are screwed--you *need* a separate pad for each 
scope entry, because closures have to keep a handle on those pads for as 
long as they live for things to function properly.
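The closure problem can be made concrete with a small sketch (Python for illustration only; the function and variable names are invented). With a fresh pad per scope entry, each closure captures its own recursion level; with a single shared, temp()-style pad, every closure ends up looking at the same slot:

```python
# Fresh pad per scope entry: each closure captures the pad that was
# live when it was created, so recursion levels stay distinct.
def make_counters():
    closures = []
    def enter(depth):
        pad = {'depth': depth}        # new pad for this scope entry
        closures.append(lambda: pad['depth'])
        if depth < 3:
            enter(depth + 1)
    enter(1)
    return [c() for c in closures]    # -> [1, 2, 3]

# One shared pad with temp()-style overwriting and no clone: all the
# closures share a single slot, so they all report the same value.
shared_pad = {}
def make_counters_shared():
    closures = []
    def enter(depth):
        shared_pad['depth'] = depth   # overwrite, no clone
        closures.append(lambda: shared_pad['depth'])
        if depth < 3:
            enter(depth + 1)
    enter(1)
    return [c() for c in closures]    # -> [3, 3, 3]
```

Even a save/restore on scope exit wouldn't rescue the second version: the closures would still share one slot rather than each holding its own pad.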

This also makes scope entry and exit costlier, since you need to make a 
savestack entry and restore, respectively, for each lexical. I don't think 
it'd be a win, even if closures weren't getting in your way.
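The per-lexical entry/exit cost being weighed here looks roughly like this (an illustrative sketch with invented names, not the actual interpreter code): every scope entry pushes the old value of each lexical onto a save stack, and every exit pops and restores, so the cost is proportional to the number of lexicals on *both* entry and exit, versus a single pad clone on entry.

```python
# temp()/savestack sketch: save each lexical's old value on scope
# entry, restore it on scope exit.
savestack = []

def scope_enter(pad, names):
    for name in names:                # one savestack entry per lexical
        savestack.append((name, pad.get(name)))

def scope_exit(pad, names):
    for _ in names:                   # one restore per lexical
        name, old = savestack.pop()
        pad[name] = old
```

A pad clone, by contrast, is one allocation-plus-copy on entry and nothing special on exit (the old pad pointer is simply reinstated).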

It also means more per-thread data, since then the pointer to "the scope's 
pad" (and each scope *still* needs a pad--you can't have a single global 
structure here) would need to be tied to a single fixed location, which 
means one more pointer for the sub in threadspace. (Though that's less of a 
big deal, I expect)

>If we did this, I don't think the cost would be greater to recurse than
>it would be for array-of-arrays.  (Especially since we'd make sure to
>optimize the hell out of temp.)  This would also lead to less code to
>write and a smaller binary.  Plus a simple way to do static: don't
>temp()orize the variable on entry.

Nope, just won't work. Not to mention that peeking back outside your 
current scope via MY tricks wouldn't work right when recursing. We'd 
either need to walk back out the savestack (ick) or we'd end up peering 
at ourselves when we wanted the previous recursion.

                                        Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                         have teddy bears and even
                                      teddy bears get drunk
