On Sat, Apr 16, 2005 at 10:36:37AM +0200, Leopold Toetsch wrote:
: Larry Wall <[EMAIL PROTECTED]> wrote:
: > Perl 6 tends to distinguish these as different operators, though Perl 5
: > did overload the bitwise ops on both strings and numbers, which newbies
: > found confusing in ambiguous cases, which is why we changed it.
: 
: We have distinct functions for bitwise shifts on ints and strings. That's
: no problem. But can the right operand be anything other than a plain
: integer?

I don't think Perl cares.  (Can't speak for other languages.)  I do think
bit shifts tend to be in very hot code for crypto routines, so any hints
we can give the optimizer would help those apps.
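For concreteness, here's the sort of hot-path shift I mean, sketched in
Python (the rotl32 helper is mine, not from any real crypto library): the
right operand is nearly always a small constant int, which is exactly the
hint an optimizer wants.

```python
# A rotate-left over 32-bit words, the bread and butter of many hash and
# cipher kernels.  The shift count n is expected to be a plain int in
# 0..31; a compiler that knows that can emit a single rotate instruction.
MASK32 = 0xFFFFFFFF

def rotl32(x, n):
    n &= 31
    return ((x << n) | (x >> (32 - n))) & MASK32
```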

: Above are only the PMC variants. There are optimized forms for array and
: hash lookup by native types:
: 
:   Px = Py[Iz]
:   Px = Py[Sz]

Is there a bitarray lookup by native int?
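To be concrete about what I'm asking for, a bit-array lookup keyed by a
plain native int might look like this Python sketch (BitArray here is
hypothetical, not an actual Parrot PMC):

```python
# Packed bits indexed by an untagged integer: one shift, one mask, no
# key-PMC machinery in the path.
class BitArray:
    def __init__(self, nbits):
        self.bits = bytearray((nbits + 7) // 8)

    def __getitem__(self, i):
        # i is a plain native-style int index
        return (self.bits[i >> 3] >> (i & 7)) & 1

    def __setitem__(self, i, v):
        if v:
            self.bits[i >> 3] |= 1 << (i & 7)
        else:
            self.bits[i >> 3] &= ~(1 << (i & 7)) & 0xFF
```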

: But with PMCs we seem to have a bunch of different key-ish PMCs,
: including a BigInt PMC for bitarrays.

I don't mind the general MMDish cases, as long as they don't get
in the way of writing a devilishly fast LINPACK in Perl 6 without
too many contortions.  And we can certainly contort Perl 6 to our
hearts' content, but I'm just trying to figure out whether ordinary
arrays default to shape(int) or shape(Int).  My gut feeling is that
defaulting to shape(int) is going to buy the optimizer something, but
it's just that, a gut feeling.  (That, and the fact that plural slice
subscripts are generally visible to the compiler, because we don't
automatically interpolate $foo into a list even if it's a sublist.
So we generally know when we're doing a singular subscripting op.)
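By way of analogy (Python here, since it has both forms handy; none of
this is Perl 6 syntax), the shape(int)/shape(Int) distinction is roughly
machine-typed storage versus boxed objects:

```python
from array import array

# A machine-typed array stores raw native ints the optimizer can reason
# about, like shape(int); a generic list boxes arbitrary-precision
# objects, like shape(Int).
native = array('q', [1, 2, 3])   # 'q' = signed 64-bit elements
boxed  = [1, 2, 3]               # plain Python objects

boxed.append(2**100)             # bigints are fine in the boxed form
try:
    native.append(2**100)        # but they overflow a native element
    overflowed = False
except OverflowError:
    overflowed = True
```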

: With MMD we'd have one function per key type, without the usual cascaded
: if statements:
: 
:   if key.type == Int
:      ...
:   elsif key.type == Slice
:      ...
: 
: From a performance POV, MMD is faster with optimizing run cores that can
: rewrite the opcode, and about the same speed with a plain MMD function cache.
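As a toy illustration (Python, and nothing like Parrot's actual
internals), here are the two dispatch strategies Leo describes, side by
side:

```python
# One function per key type...
def get_int(aggregate, key):   return aggregate[key]
def get_slice(aggregate, key): return aggregate[key.start:key.stop]

class Slice:
    def __init__(self, start, stop):
        self.start, self.stop = start, stop

# ...reached via cascaded ifs: every call re-tests the key's type.
def lookup_if(aggregate, key):
    if isinstance(key, int):
        return get_int(aggregate, key)
    elif isinstance(key, Slice):
        return get_slice(aggregate, key)
    raise TypeError(key)

# ...or via an MMD-style function cache: one hash probe, no if chain.
DISPATCH = {int: get_int, Slice: get_slice}

def lookup_mmd(aggregate, key):
    return DISPATCH[type(key)](aggregate, key)
```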

Yes, but there's still got to be some internal overhead in deciding
whether the run-time Int object is representing the integer in some
kind of extended bigint form.  (Or are you meaning Int as Parrot's
native integer there?)  Plus if you're optimizing based on run-time
typing, there has to be some check somewhere to see if your
assumptions are violated so you can pessimize.  That sort of check
can be factored out to some extent, but not to the same extent that a
compiler can factor it out with sufficient advance type information,
either direct or inferred.  (At least, that's my assumption.  I don't
claim to be up-to-date on the latest in optimizing run cores.)
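Here's roughly what I imagine that factored-out check looks like, as a
Python sketch (my assumption about how a rewriting run core works, not a
description of Parrot):

```python
# A site specialized for native-sized ints must guard its run-time
# typing assumption and fall back to the generic path when it fails.
NATIVE_MAX = 2**63 - 1

def generic_add(a, b):
    return a + b               # slow path: handles bigints, any types

def specialized_add(a, b):
    # guard: verify the assumption the optimizer bet on still holds
    if type(a) is int and type(b) is int \
       and a <= NATIVE_MAX and b <= NATIVE_MAX:
        return a + b           # fast path
    return generic_add(a, b)   # assumption violated: pessimize
```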

Basically, Perl[1-5] got a lot of performance out of assuming IEEE
floats were available and sufficiently accurate, whereas earlier
languages like REXX had to roll their own numerics to achieve accuracy.
I'd like to think that native integers will (soon) always be big
enough for most purposes, and am wondering how much it buys us to
stay close to the metal here.  (And whether the answer is different
for 32-bit and 64-bit machines, but that feels like a hack.)
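A quick Python stand-in for the 32-bit versus 64-bit question (the wrap
helper just simulates a fixed-width signed int, and the time_t value is
merely a familiar example):

```python
# The same value that overflows a 32-bit native int fits easily in 64.
def wrap(n, bits):
    # simulate a native signed integer of the given width
    mask = (1 << bits) - 1
    n &= mask
    return n - (1 << bits) if n >= (1 << (bits - 1)) else n

seconds_since_1970 = 4_000_000_000      # a time_t past 2038
wrap32 = wrap(seconds_since_1970, 32)   # goes negative on 32-bit metal
wrap64 = wrap(seconds_since_1970, 64)   # still exact on 64-bit
```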

Larry
