On 29 Aug 2000, Russ Allbery wrote:
> Dan Sugalski <[EMAIL PROTECTED]> writes:
>
> > 2) Having a mechanism to automagically load in chunks of executable code
> > only when needed would be nice
>
> It's not clear to me how useful this really is from an internals speed
> standpoint on modern systems. It's no longer always true that increasing
> the size of an executable will make it start slower or consume more
> memory, and I expect that to become less true over time. And loading
> dynamic libraries is actually fairly slow; static code loads faster
> because it doesn't have to do the relocations and the additional disk
> accesses.
I do understand that a fully static link does tend to be faster, since all
the relocation fixups are done at link time, even on systems with
efficient run-time symbol resolution. (The flip side being the lack of
flexibility that goes with it.)
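For what it's worth, here's a minimal sketch of where that run-time cost
lives on a dlopen()-style system; the library name is made up purely for
illustration:

    /* Sketch only. Compile with: cc lazy.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* RTLD_NOW would resolve every undefined symbol right here,
           paying the whole relocation cost before any code runs.
           RTLD_LAZY defers function resolution until first call, so
           code paths never taken are never fixed up at all. */
        void *h = dlopen("libheavyext.so", RTLD_LAZY);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        dlclose(h);
        return 0;
    }

A static build sidesteps all of that, which is Russ's point; the question
is whether we always want to pay for everything up front instead.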
As I said, this may very well not be a win in the general case. Odds are
that recent Unices won't gain anything from separating pieces out of the
core, and neither will Windows or VMS.
However, this point was vague on purpose. It's *not* referring only to
pieces deemed "really, really core perl" (like, say, POSIX), nor only to
the latest and greatest in OS technology. (SunOS 4.x, anyone? Or the Palm,
or embedded systems that might end up referencing ROM-based (and usually
slower) overlays?)
It's not unreasonable to expect this sort of feature to be used for the
more esoteric extensions to the perl core, or for commonly and heavily
used extensions. I wouldn't, for example, want to load DBD::Oracle or a
full complex math library (complete with FFTs and Gaussian filters) every
time I was ripping through a text file.
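Something like the following is what I'm picturing, purely as a sketch;
the load_extension() helper and the boot-symbol convention here are
hypothetical, not anything perl actually has:

    #include <dlfcn.h>
    #include <stdio.h>

    typedef void (*boot_fn)(void);

    /* Pull in an extension's shared object and run its boot routine
       only when the extension is first referenced, rather than at
       interpreter startup. */
    static int load_extension(const char *soname, const char *bootsym)
    {
        void *handle = dlopen(soname, RTLD_LAZY);
        if (!handle) {
            fprintf(stderr, "can't load %s: %s\n", soname, dlerror());
            return 0;
        }
        boot_fn boot = (boot_fn)dlsym(handle, bootsym);
        if (!boot) {
            fprintf(stderr, "no %s in %s\n", bootsym, soname);
            dlclose(handle);
            return 0;
        }
        boot();  /* extension registers its functions with the core */
        return 1;
    }

The text-munging script then never pays for the Oracle driver or the FFT
code unless it actually touches them.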
If the feature exists as part of the design from the start, it places
certain requirements on the lexer/parser and the core interpreter that
will make modularizing things a necessity, and thus make it actually work
in those situations where it is reasonable to do.
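At the C level, "modular" might look something like this; the names are
invented, but the point is that the core dispatches through a table it
can repoint after a load:

    /* Hypothetical sketch only; none of these names are real perl
       internals. If the parser and friends sit behind a table of
       function pointers, a dynamically loaded chunk can be slotted
       in at run time, and a static build just fills the table in
       at link time instead. */
    typedef struct {
        void *(*parse)(const char *source);  /* source -> op tree */
        void  (*run)(void *optree);          /* execute the op tree */
    } core_vtable;

    /* The core calls through this; loading a module is just a
       dlopen() plus repointing it. */
    extern core_vtable *current_core;

The same interface serves the static and dynamic cases alike, which is
the real win of designing it in from the start.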
Dan