At 10:52 PM +0200 4/16/02, Marco Baringer wrote:
>Steve Fink <[EMAIL PROTECTED]> writes:
>
>> On the other hand, I may be overlooking a good reason for adding
>> these. The two reasons I can think of right now are (1) you've done
>> benchmarking and combining these ops demonstrates a significant
>> speedup or (2) some hardware architectures have native instructions
>> for the loop ops that are measurably more efficient than the direct
>> translation of the primitive op pair. (Though I doubt that; the
>> primitives are at least as easy for the hardware to parallelize and
>> neither possibility touches any more or less data memory.)
>
>Neither (1) nor (2) was on my mind when I wrote the ops. It was
>purely a my-hand-written-pasm-code-would-be-cleaner-if kind of
>thing. It boils down to: who is the bytecode aimed at, compilers or
>people? If Parrot were truly aimed at compilers it would be RISCy,
>and PMCs + multi-method ops wouldn't exist (at least not directly in
>Parrot).
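For concreteness, here is a minimal sketch of the kind of pairing under
discussion. The primitive form below uses plain counter/branch ops; the
fused form at the bottom is hypothetical, standing in for the ops Marco
posted, and its name is illustrative only:

     set I0, 10            # loop counter
   loop_top:
     print I0
     print "\n"
     dec I0                # primitive op one: decrement
     if I0, loop_top       # primitive op two: branch while I0 is non-zero
     end

     # Hypothetical fused form (illustrative name, not a real core.ops entry):
     #   loop I0, loop_top   # decrement and branch in a single op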
That's not actually true. Being aimed at compilers, it makes more sense
to make Parrot CISCy. There are a few reasons why:

1) CISCy Parrot translates better to RISCy native code.
2) RISCy interpreters waste a lot of time.
3) Efficient RISCy compilers are tough, and we've limited resources.

Taking the points in order:

1) While RISC is, at this point, generally considered a win for
hardware, each CPU has different performance characteristics. A naive
translation to native code would tend to favor whichever processor
happened to match what we chose, to the detriment of the others. A
more sophisticated translation is possible, but it's actually more
difficult to go from one RISC to another than to go from a CISC to a
RISC.

2) If we don't translate to machine code, then we have to pay a lot
more in opcode overhead. While a RISC opcode and a CISC opcode carry
the same per-op dispatch overhead, if we need to execute three or four
times as many opcodes, we have three or four times as much overhead to
do the same work. Ick.

3) It's a lot easier to write a CISC compiler than a RISC one,
especially if you're in a position (which we are) to define the
opcodes. Translating this:

   @foo = map \&bar, @baz;

to this:

   getvar P0, '@foo'
   new P1, List
   getvar P2, '@baz'
   push P1, P2
   getvar P2, '&bar'
   map P0, P2, P1

is trivial. A more RISC-like translation would be, well, less than
trivial (a rough sketch of one appears below, after the signature).

>If parrot is aimed at people hand writing parrot assembler
>then the looping ops make sense. This being the real world, Parrot is
>somewhere in between these two, so I would like to put the question
>differently:
>
>   loop and friends make my life as an assembler writer easier; do
>   they have any detrimental effects on parrot?

That's a very good question. People writing hand-assembly aren't the
real target for Parrot; people writing compilers are. There's no
reason to leave the hand-assembly folks out, though. The two issues
are:

1) Conceptual size
2) Runtime size

#2's not a huge issue, but I do occasionally worry about it.

#1's a bigger issue. The concern is the number of opcode functions in
core.ops and the other ops files (the functions in the generated C
files are mostly irrelevant, as they're machine-generated and we don't
need to maintain them) that we actually need to write code for. There
are still a fairly large number of opcode functions we need to add to
the interpreter, and I'm a little hesitant to add too many variants on
a particular theme until we're closer to the end and can judge how
much mental room we have left.

Having said that, I do like the ops, and we'll probably add something
like them in.
-- 
					Dan

--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk
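For contrast, a rough sketch of the "more RISC-like" translation of the
map example above, built only from primitive-style ops. This is in the
same illustrative spirit as Dan's snippet, not a claim about core.ops
as it stands: getvar, the keyed element fetch, and the length-via-set
idiom are assumptions, and the actual call into &bar is elided:

     getvar P2, '@baz'            # source array
     new    P1, PerlArray         # result list
     set    I1, P2                # element count (assumes set Ix, Px gives length)
     set    I0, 0                 # index
   next_elem:
     ge     I0, I1, done          # all elements processed?
     set    P3, P2[I0]            # fetch element (assumed keyed-access syntax)
     # ...build a call frame and invoke &bar on P3, leaving the result in P4...
     push   P1, P4                # append the mapped value
     inc    I0
     branch next_elem
   done:
     getvar P0, '@foo'
     set    P0, P1                # bind the result list to @foo (assumed semantics)

Even with the call sequence left out, that is roughly a dozen ops plus
an explicit loop in place of the six-op CISCy version, which is the
gap Dan's point 3 is pointing at.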