Michael G Schwern wrote:
> TSa (Thomas Sandlaß) wrote:
>> I want to stress this last point. We have the three types Int, Rat and Num.
>> What exactly is the purpose of Num? The IEEE formats will be handled
>> by num64 and the like. Is it just there for holding properties? Or does
>> it do some more advanced numeric stuff?
>
> "Int", "Rat" [1] and "Num" are all human types.  They work like humans were
> taught numbers work in math class.  They have no size limits.  They shouldn't
> lose accuracy. [2]
>
> As soon as you imply that numbers have a size limit or lose accuracy you are
> thinking like a computer.  That's why "num64" is not a replacement for "Num",
> conceptually, nor is "int64" a replacement for "Int".  They have limits and
> lose accuracy.

All agreed.

[2] "Num" should have an optional limit on the number of decimal places
    it remembers, like NUMERIC in SQL, but that's a simple truncation.

I disagree.

For starters, any "limit" built into a type definition should be expressed not as stated above but rather with a simple subtype declaration, eg "subtype of Rat where ...", which tests, for example, that the Rat is an exact multiple of 1/1000.
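
Rakudo spells that declarator "subset" rather than "subtype"; a minimal sketch of the same idea (the MilliRat name is mine):

    # A Rat constrained to exact multiples of 1/1000; %% is the
    # divisibility operator.
    subset MilliRat of Rat where * %% (1/1000);

    my MilliRat $price = 3.141;     # ok: exactly 3141/1000
    # my MilliRat $oops = 3.1415;   # fails the where-clause type check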

Second, any truncation should be done at the operator level, not at the type level; for example, the rational division operator could have an optional extra argument that says the result must be rounded to an exact multiple of 1/1000; without the extra argument, the division doesn't truncate anything.
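
Something like this hypothetical routine, say (the rat-div name and the :step argument are illustrative, not anything specced):

    # Exact rational division; the optional :step rounds the result to
    # an exact multiple of that step, otherwise nothing is truncated.
    sub rat-div(Real $a, Real $b, Rat :$step) {
        my $q = $a.Rat / $b.Rat;
        $step.defined ?? $q.round($step) !! $q
    }

    say rat-div(1, 3);                 # 0.333333  (exactly 1/3)
    say rat-div(1, 3, :step(1/1000));  # 0.333     (exactly 333/1000)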

For any numeric operation that would return an irrational number in the general case, such as sqrt() and sin(), where the user wants the result truncated to an exact rational number rather than kept as a symbolic number, the operator should have an extra argument that specifies rounding, eg to an exact multiple of 1/1000.
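
Assuming Rakudo's round-with-scale semantics, that largely composes already: sqrt returns a Num, and rounding it to a Rat step hands back an exact Rat:

    my $r = sqrt(2).round(1/1000);  # Num in, exact Rat out
    say $r;                         # 1.414
    say $r.^name;                   # Rat (exactly 707/500, not a float)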

Note that a generic numeric rounding operator would also take the "exact multiple of" argument rather than a "number of digits" argument, except when that operator is simply rounding to an integer, in which case no such argument is applicable.
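
If I read Rakudo's built-in round correctly, it already behaves this way: the optional argument is a scale ("exact multiple of"), not a digit count, and omitting it rounds to an integer:

    say 3.14159.round;         # 3     (no argument: nearest integer)
    say 3.14159.round(1/100);  # 3.14  (nearest multiple of 1/100)
    say 2.71828.round(1/4);    # 2.75  (the scale need not be decimal)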

Note that, for extra determinism and flexibility, any operation rounding/truncating to a rational would also take an optional argument specifying the rounding method, eg so users can choose between the likes of half-up, to-even, to-zero, etc. Then Perl can easily copy whatever semantics a user desires, including when code ported from other languages needs to maintain exact semantics.
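
A hypothetical sketch of such an argument (the sub name, mode names, and defaults are all mine, not spec):

    # Round $x to an exact multiple of :step, with a selectable method.
    sub round-to(Real $x, Rat :$step = 1/1000, Str :$mode = 'half-up') {
        my $q = $x / $step;
        my $n = do given $mode {
            when 'half-up' { ($q + 1/2).floor }   # ties round toward +Inf
            when 'to-zero' { $q.truncate }        # drop the fraction
            when 'to-even' {                      # banker's rounding
                my $f    = $q.floor;
                my $frac = $q - $f;
                $frac > 1/2 ?? $f + 1
                !! $frac < 1/2 ?? $f
                !! ($f %% 2 ?? $f !! $f + 1)
            }
            default { die "unknown rounding mode '$mode'" }
        };
        $n * $step
    }

    say round-to(2.5005);                  # 2.501 (half-up)
    say round-to(2.5005, :mode<to-even>);  # 2.5   (ties go to even)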

Now, as I see it, if "Num" has any purpose apart from "Rat", it would be a "whatever" numeric type, effectively a union of the Int|Rat|that-symbolic-number-type|etc types, for people who just want to accept numbers from somewhere and don't care about the exact semantics. The actual underlying type used in any given situation would determine the exact semantics. So Int and Rat would be exact and unlimited precision, and maybe Symbolic or IRat or something would be the symbolic number type, also with exact-precision components.
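
As a sketch of that reading, a subset over a type junction would do (the AnyNum name and its member list are illustrative; the symbolic type doesn't exist yet):

    # "Whatever" number: accept any of the exact numeric types.
    subset AnyNum where Int | Rat;   # | that-symbolic-type, eventually

    sub show(AnyNum $n) { say "$n is a {$n.^name}" }
    show(42);    # 42 is a Int
    show(1/3);   # 0.333333 is a Rat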

Come to think of it, isn't "whatever" how Num is already defined? If so, I think that is clearly distinct from Rat.

-- Darren Duncan
