Dan Sugalski wrote:
At 5:46 PM +0200 10/21/02, Leopold Toetsch wrote:
With an approach like this, we could cut the VTABLE down to roughly 1/3 of its current size. The _keyed entries would only consist of the set_.._keyed{,_int} variants plus exists_keyed and defined_keyed. And we would never have the opcode explosion to 64 times the current size.
The big disadvantage here is speed. It means that specialized aggregates will have to create temporary PMCs for things that don't already have them, which is potentially slow and wasteful, something I'd rather avoid.
No: take the "new px, .PerlUndef" out of the opcode; it only has to be done once. This temporary Px can be reused by _all_ _keyed ops.
If the LHS doesn't exist, as in my example, a new PMC has to be created anyway. If the LHS exists, the set_value method assigns a new value.
The only difference with my solution is +4 opcode dispatches in the worst case, minus the checks we currently have that a key exists. Those checks in each _keyed method could then be tossed.
And for my demonstration I used the currently existing set_ ops. I could imagine that, as in my proposal, special key_ ops could be used which have the variable/value split built in, i.e. they could prepare pointers to the PMC values directly.
Encouraging the use of specialized aggregates is one of the reasons for the typing system coming in with perl 6, and given the size of aggregates in perl 5 I think it's something that will see some heavy use.
The HL language can and will use multi_keyed operations. Sean already stated that probably every (local) variable usage will be a multi_keyed operation, where the aggregates are the lexical scope pads.
I just hide these operations in the assembler (or imcc currently).
I don't mind the opcode explosion, honestly. It's automatically generated, and that's not a big deal. There are other ways to cut down on it as well, if we find the need.
I do mind. We currently have ~900 opcodes; we would end up with ~60000. I can't imagine that the CPU cache will be happy with that many opcodes. Multiply this by 4 (normal, CGoto, CPderef, JIT) for an estimate of the size of a final full-fledged parrot executable.
For the moment, I'd rather things stay the way they are. If we can produce demonstrable speed wins, then I'll change my mind.
It's hard to demonstrate speed wins against a system we don't have. But I'll take a closer look at how to simulate this.
... For now, though, things stay generally the way they are. We can do some mild reworking to get things manageable if they're currently really unmanageable.

As I already demonstrated with my op_stat, 1/3 of the current opcodes are unused; neither the parrot tests nor any perl6 program produces these ops.
With the final ~60000 opcodes, we would probably have 1000 used ops scattered over a huge program; this is unmanageable IMHO.
leo