On Tue, Dec 12, 2000 at 02:20:44PM +0000, David Mitchell wrote:
> If we assume that ints and nums are perl builtins, and that some people
> have implemented the following external types: byte (eg as implemented
> as a specialised array type), bigreal, complex, bigcomplex, bigrat,
> quaternion; then the following table shows how well my system copes:
> 
> num - int             gives accurate num
> int - num             gives accurate num

what happens if the size of int is such that the maximum int is larger than
the value beyond which nums can no longer maintain integer accuracy?

for example, 8 byte doubles as num, 8 byte longs as int?

does one promote a num in the range (min_int, max_int) to an int, and do
an int calculation?

for example (2**62) - 1
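
to make that concrete, a standalone C sketch (not perl source, just assuming
8 byte doubles for num and 8 byte signed longs for int):

    #include <stdio.h>

    int main(void) {
        long long i = 1LL << 62;    /* 2**62 held as an int                   */
        double    n = (double)i;    /* 2**62 is a power of 2, so exact as num */

        /* done as a num: (2**62) - 1 needs 62 bits of mantissa, an 8 byte
         * double only has 53, so the result rounds straight back to 2**62   */
        printf("as num: %.0f\n",  n - 1.0);   /* 4611686018427387904 */

        /* done as an int, after noticing n lies in (min_int, max_int),
         * the answer stays accurate                                          */
        printf("as int: %lld\n", i - 1);      /* 4611686018427387903 */
        return 0;
    }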

> With the sv1->sub[typeof(sv2)](sv2) scheme, even something as simple as
> byte - bigreal is problematic, as this would cause byte->sub[GENERIC] to be called,
> which has very little chance of 'doing the right thing'.

unless it uses your scheme at this point.
[this might be the correct speed tradeoff - common types know how to
interact with other common types directly, and know how to call a slower but
maximally accurate routine when an operand is beyond their competence]
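
something like this, perhaps (a hand-wavy C sketch only; the type ids, the
sub[] vtable layout and the accurate_generic_sub() name are all made up here,
they're not from your proposal or from the perl sources):

    typedef struct sv SV;
    typedef SV *(*binop_fn)(SV *left, SV *right);

    /* builtins get fixed slots; external types (byte, bigreal, complex, ...)
     * all share the GENERIC slot                                            */
    enum { TYPE_INT = 0, TYPE_NUM = 1, TYPE_GENERIC = 2 };

    struct sv {
        int      type;
        binop_fn sub[TYPE_GENERIC + 1];  /* sv1->sub[typeof(sv2)], as quoted above */
    };

    /* stand-in for the slow, maximally accurate fallback (e.g. your
     * negotiation scheme); the real thing would live elsewhere              */
    static SV *accurate_generic_sub(SV *sv1, SV *sv2)
    {
        (void)sv1; (void)sv2;
        return 0;
    }

    static SV *do_subtract(SV *sv1, SV *sv2)
    {
        /* common types fill in direct, fast entries for the common types
         * they know about; anything more exotic lands in the GENERIC slot   */
        int slot = sv2->type < TYPE_GENERIC ? sv2->type : TYPE_GENERIC;
        binop_fn fn = sv1->sub[slot];

        /* beyond this type's competence: punt to the accurate slow path     */
        return fn ? fn(sv1, sv2) : accurate_generic_sub(sv1, sv2);
    }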

It's surprising how easy it is to slow things down with decision-making code
in your arithmetic ops. I'm trying to coax perl5 into doing better 64 bit
integer arithmetic:

http://www.xray.mpe.mpg.de/mailing-lists/perl5-porters/2000-12/msg00499.html

and the simple hack of trying to make scalars IV (signed) rather than
UV (unsigned) whenever possible, and consequently hitting the first code
path in each op (the IV OP IV case), gives about a 2% speed-up (timing the
perl5.7 regression tests). [note, this doesn't make perl faster overall, it
just claws back the slowdown that other parts of my provisional changes
have introduced]
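
the shape of the thing, stripped right down (not the actual pp_subtract
change from that patch; the struct and the iok flag here are stand-ins):

    #include <limits.h>

    typedef long long IV;   /* 8 byte signed   */
    typedef double    NV;   /* 8 byte floating */

    typedef struct {
        int iok;   /* stand-in for a "has a valid IV" flag */
        IV  iv;
        NV  nv;
    } my_sv;

    /* the IV OP IV case is tested first, so the more scalars we manage to
     * keep as IVs, the more often we take the cheap branch and the less
     * the decision making costs us                                          */
    static void my_subtract(my_sv *dst, const my_sv *a, const my_sv *b)
    {
        if (a->iok && b->iok) {
            IV l = a->iv, r = b->iv;
            /* only take the integer path when l - r cannot overflow         */
            if (!((r > 0 && l < LLONG_MIN + r) || (r < 0 && l > LLONG_MAX + r))) {
                dst->iok = 1;
                dst->iv  = l - r;
                return;
            }
        }
        /* slower general case: fall back to floating point                  */
        dst->iok = 0;
        dst->nv  = (a->iok ? (NV)a->iv : a->nv) - (b->iok ? (NV)b->iv : b->nv);
    }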

hang on, there was a point all that was supposed to back up: accuracy is
needed, but I fear that a single general scheme to deliver it will slow down
the common cases.

Nicholas Clark
