On Sun, Dec 09, 2001 at 07:46:46PM -0500, Bryan C. Warnock wrote:
> Proposal:
>
> For background, revisit my proposed Bytecode Format (v2) at
> http:[EMAIL PROTECTED]/msg05640.html.
> Although it is outdated, it gives a general gist of the direction of
> my thinking. In particular, pay no heed to the incremental, relative
> addressing of each section. By capping bytecode to an arbitrary size,
> we should be able to do direct indexing.
>
> - All bytecode is by default written in native endianness. This
> maximizes efficiency for the native format (goal 1), and leaves
> reading by other platforms efficient (goal 2). Alternatively, the
> user should be able to write or convert bytecode to another format.
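Converting between orders is the cheap part, at least for the
integer-heavy sections of the stream. As a minimal sketch (assuming,
purely for illustration, a bytecode that is a flat stream of 32-bit
opcode words; a real packfile obviously carries more than that):

    # Illustration only: treat bytecode as bare 32-bit opcode words.
    # 'L' is native-order unsigned 32-bit, 'N' is big-endian.
    sub native_to_big_endian {
        my ($native_bytes) = @_;
        my @words = unpack 'L*', $native_bytes;  # read in native order
        return pack 'N*', @words;                # rewrite big-endian
    }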
That raises the question: what's the canonical representation of
bytecode? It's not a difficult question; blessing one of the possible
formats as "canonical" should be sufficient. This part of the problem
is really a non-issue and a SMOP; it sounds suspiciously like
reinventing TIFF's endian sensitivity, with the added hiccups of
32-/64-bit alignment and floating point representation...

The divergence from the "One True Bytecode Format" that some coffee
vendors promote is noted, and interesting...

> - We set a 2 GB hard cap on bytecode files, and define a continuation
> policy. (Although, personally, if we produce files of that size,
> *somebody* needs to be shot.)

...or simply specify that the m/XPR|EHD|JTZ/ instruction must appear
at the 2 GB position of a bytecode file that's > 2 GB. :-)

> Other considerations:
>
> Given that bytecode is simply data, many of the solutions overlap
> with other areas of Parrot and Perl, such as within pack and unpack.
> If implemented, the user can pack and unpack arbitrary data for or
> from an arbitrary platform.

Hmmmm. Would it be interesting/meaningful/worthwhile to add support
for (un)?pack to read/write PBC? (A sketch of what that might look
like follows the sign-off.)

> - I've code that currently converts 32, 64, 96, and 128 bit floating
> point representations among all but the IBM format [...]

Um, wow. I'm impressed. :-)

Z.
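P.S. To make the (un)?pack musing concrete, here is roughly what
reading a packfile header might look like with plain unpack. The
layout below (three 32-bit words: magic, byte-order mark, word size)
is invented for the sketch, not the actual PBC spec; TIFF-style, the
byte-order mark tells the reader whether to reread in the other order:

    # Hypothetical layout, NOT the real PBC header: three 32-bit
    # words -- magic number, byte-order mark, word size in bytes.
    my $BOM = 0x01020304;

    sub read_header {
        my ($fh) = @_;
        my $raw;
        read($fh, $raw, 12) == 12 or die "short header\n";

        # Try native order first ('L'); if the byte-order mark comes
        # out scrambled, fall back to the explicit non-native order.
        my ($magic, $bom, $wordsize) = unpack 'L3', $raw;
        if ($bom != $BOM) {
            # 'N' is big-endian, 'V' little-endian; pick whichever
            # the native order is not.
            my $other = pack('L', 1) eq pack('V', 1) ? 'N3' : 'V3';
            ($magic, $bom, $wordsize) = unpack $other, $raw;
            $bom == $BOM or die "unrecognized byte order\n";
        }
        return ($magic, $wordsize);
    }

Writing is the mirror image: pack 'L3' and let the reader cope, which
is exactly the native-endian-by-default policy above.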
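P.P.S. Converting the float *formats* is the genuinely hairy part, and
presumably the bulk of that code. For flavor only, the easy case:
pulling apart a big-endian IEEE 754 single by hand, so the decode
doesn't depend on the native float layout. (Infinities, NaNs, and the
64/96/128-bit widths are left as the hard exercises they are.)

    # Decode a big-endian IEEE 754 single-precision value from its
    # raw 4-byte pattern.  Ignores Inf/NaN (exponent 255) entirely.
    sub decode_ieee_single {
        my ($bytes) = @_;
        my $bits = unpack 'N', $bytes;
        my $sign = ($bits >> 31) & 1;
        my $exp  = ($bits >> 23) & 0xFF;
        my $mant = $bits & 0x7F_FFFF;
        my $val  = $exp == 0
            ? ($mant / 2**23) * 2**-126              # denormal
            : (1 + $mant / 2**23) * 2**($exp - 127); # normal
        return $sign ? -$val : $val;
    }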