At 01:01 AM 3/10/2001 +0100, Paolo Molaro wrote:
>On 03/05/01 Dan Sugalski wrote:
> > =item Arbitrary precision integers
> >
> > Big integers, or bigints, are arbitrary-length integer numbers. The
> > only limit to the number of digits in a bigint is the lesser of the
> > amount of memory available or the maximum value that can be
> > represented by a C<UV>. This will generally allow at least 4 billion
> > digits, which ought to be far more than enough for anyone.
>
>During the RFC process there was a lot of talk about reducing
>the perl core, and while there were different views on _what_
>should be removed, if anything, I don't think that including bigint
>and bigfloat was considered for that goal :-)
There was a lot of talk, yes. It wasn't about this, though, because proper
handling of numerics is a given, and that requires bigint/bigfloat support
built into the core.
>The core needs to be aware of overflows and have hooks to plug
>an external bigint implementation when that happens, but should not
>demand a specific bigint implementation.
It doesn't. The core will provide a bigint and bigfloat implementation. I'd
bet we won't see any alternative implementations, but people are certainly
welcome to write them.
>I can't find the reference now, but it seems it will be required
>for the base integer type to know how to upgrade to a bigint.
Nope, you're misremembering. Overflow and underflow detection will
certainly be required inside the math routines for the native types, which
is where it belongs. Data conversion will be handled by library code, which
will probably be modular enough to replace if need be.
Nothing in the core will assume anything about the format of the internals
of any variable, other than the variable's vtable code, unless we need to
get evil for speed.
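In case that's unclear, here's a toy C sketch of the kind of vtable
indirection meant here; the names (value, vtable, core_add, and so on)
are made up for illustration and aren't the actual core structures:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical vtable layout: the core only ever goes through
     * per-type function pointers and never pokes at the internals. */
    typedef struct value value;

    typedef struct vtable {
        void (*add)(value *dest, const value *a, const value *b);
        void (*dump)(const value *v);
        /* ...subtract, multiply, upgrade-to-bigint, and so on */
    } vtable;

    struct value {
        const vtable *vt;   /* which implementation owns this value     */
        void *internals;    /* opaque to the core; only vt's code looks */
    };

    /* Core-level add: no idea what the representation is, it just
     * dispatches to whatever implementation the value carries. */
    void core_add(value *dest, const value *a, const value *b)
    {
        a->vt->add(dest, a, b);
    }

    /* One possible implementation: a plain native integer. */
    static void iv_add(value *dest, const value *a, const value *b)
    {
        *(int64_t *)dest->internals =
            *(int64_t *)a->internals + *(int64_t *)b->internals;
    }
    static void iv_dump(const value *v)
    {
        printf("%lld\n", (long long)*(int64_t *)v->internals);
    }
    static const vtable iv_vtable = { iv_add, iv_dump };

    int main(void)
    {
        int64_t x = 40, y = 2, z = 0;
        value a = { &iv_vtable, &x }, b = { &iv_vtable, &y };
        value result = { &iv_vtable, &z };
        core_add(&result, &a, &b);
        result.vt->dump(&result);   /* prints 42 */
        return 0;
    }

An alternative bigint implementation would plug in the same way: supply
a vtable, and nothing outside its own code needs to know how the digits
are stored.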
Dan
--------------------------------------"it's like this"-------------------
Dan Sugalski                          even samurai
[EMAIL PROTECTED]                     have teddy bears and even
                                      teddy bears get drunk