On Tue, Oct 29, 2002 at 08:26:00AM +0100, Leopold Toetsch wrote:
> Dan Sugalski wrote:
> > I'm currently leaning against it only because it doesn't ultimately help
> > the JIT. What we have now is wildly cool and damn useful (and has anyone
> > heard from Daniel lately, BTW?) but there's room for more optimizations.

> Yes, that's correct. JIT wouldn't profit currently. But with an
> optimized stream of (micro-)ops, having optimized fetch/store opcodes not
> at (basic-)block but at finer granularity, JIT could profit too. Also the
> JIT optimizer run, now done at load time, would happen at compile time, so
> JIT startup time would be cut down.

But then you end up with a messier two-level register spillage problem at
compile time, don't you? Which values to spill from fast to slow registers,
and which values to spill further from slow registers to the stack? And is
there much literature on this sort of thing?

> My hack with the 3 globals obviously includes some cheating; globals are
> a no-no when there are multiple interpreters. But nevertheless we could
> produce an optimized PBC stream, where the 3*4 registers are treated as
> "fast" registers, with load/store to the 32*4 slower registers only when
> necessary. This would also fit neatly with my proposal WRT keyed access.

And the fast registers are going to be called ax, bx, cx and dx? :-)
(A rough sketch of what such an optimized load/store stream might look
like is below my sig.)

> I was also thinking of the various fixed-size integer ops for JVM or
> C#. The load/store ops would prepare integers of the needed size and do
> sign extension when necessary.

I've had 3 drafts at responding to this, and I conclude "my brain hurts".
I don't see an "obvious" clean solution to this, specifically 64 bit ops
that run correctly on 32 bit native systems, but take advantage of 64 bit
native systems. (A second sketch, of the sign-extension side of this, is
also below.)

Nicholas Clark
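
Purely as illustration (this is not Parrot source; the op names
fast_load_i/fast_store_i/add_fi and the register counts are invented),
here's roughly the shape of thing I imagine for an optimized stream that
works in a small set of fast registers and only touches the 32-per-type
register file when it has to:

/* Sketch only: a tiny "optimized" op sequence where 4 fast integer
 * registers are used for arithmetic and explicit micro-ops move values
 * to/from the ordinary 32-entry register file.  A JIT could pin the
 * fast registers to real machine registers. */
#include <stdio.h>

#define NUM_SLOW_REGS 32
#define NUM_FAST_REGS  4

typedef long INTVAL;                    /* stand-in for Parrot's INTVAL */

typedef struct {
    INTVAL int_reg[NUM_SLOW_REGS];      /* the usual register file */
    INTVAL fast_i[NUM_FAST_REGS];       /* the "fast" registers    */
} Interp;

/* Micro-ops the compile-time optimizer would emit. */
static void fast_load_i (Interp *i, int f, int s) { i->fast_i[f] = i->int_reg[s]; }
static void fast_store_i(Interp *i, int s, int f) { i->int_reg[s] = i->fast_i[f]; }
static void add_fi(Interp *i, int d, int a, int b)
{
    i->fast_i[d] = i->fast_i[a] + i->fast_i[b];
}

int main(void)
{
    Interp interp = { {0}, {0} };
    interp.int_reg[5] = 40;
    interp.int_reg[6] = 2;

    /* I5 + I6 -> I7, done entirely in fast registers, with one store
     * back to the slow file at the end. */
    fast_load_i (&interp, 0, 5);
    fast_load_i (&interp, 1, 6);
    add_fi      (&interp, 2, 0, 1);
    fast_store_i(&interp, 7, 2);

    printf("I7 = %ld\n", (long)interp.int_reg[7]);  /* 42 */
    return 0;
}

The hard part is of course not the micro-ops themselves but deciding at
compile time which values get to live in the fast set, which is the
spilling question above.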
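And for the fixed-size integer ops, the sign-extension half at least looks
mechanical; it's only 64-bit arithmetic on 32-bit-only boxes that gets
ugly. Again a sketch only, with invented names, not a proposal for the
real ops:

/* Sketch only: fixed-width loads that sign-extend into the native INTVAL,
 * and a 64-bit add that uses a native 64-bit type where the compiler has
 * one, falling back to a two-word emulation on purely 32-bit systems. */
#include <stdio.h>

typedef long INTVAL;

/* JVM/C#-style fixed-size loads: read the requested width and let the C
 * integer conversions do the sign extension into a full INTVAL.
 * (Real bytecode might need unaligned-safe reads; ignored here.) */
static INTVAL load_i8 (const void *p) { return (INTVAL)*(const signed char *)p; }
static INTVAL load_i16(const void *p) { return (INTVAL)*(const short       *)p; }
static INTVAL load_i32(const void *p) { return (INTVAL)*(const int         *)p; }

#if defined(__GNUC__) || (defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L)
#  define NATIVE_I64 1
typedef long long I64;                  /* native 64-bit type: just use it */
static I64 add_i64(I64 a, I64 b) { return a + b; }
#else
typedef struct { unsigned long lo; long hi; } I64;  /* emulated on 32-bit boxes */
static I64 add_i64(I64 a, I64 b)
{
    I64 r;
    r.lo = (a.lo + b.lo) & 0xffffffffUL;
    r.hi = a.hi + b.hi + (r.lo < a.lo); /* carry out of the low word */
    return r;
}
#endif

int main(void)
{
    signed char c = -5; short s = -123; int w = -70000;
    printf("%ld %ld %ld\n",
           (long)load_i8(&c), (long)load_i16(&s), (long)load_i32(&w));

#ifdef NATIVE_I64
    printf("sum: %lld\n", add_i64(40LL, 2LL));
#else
    {
        I64 a = { 40, 0 }, b = { 2, 0 }, r = add_i64(a, b);
        printf("sum: %lu (hi %ld)\n", r.lo, r.hi);
    }
#endif
    return 0;
}

What I still don't see is how the op stream itself stays identical across
both cases without penalising the 64-bit native systems.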