On 3 Apr 2008, at 07:59, Henning Thielemann wrote:
> But one should also be able to write (f+g)(x). - This does not
> work in Haskell, because Num requires an instance of Eq and Show.
> You could define these instances with undefined function
> implementations anyway. But also in a cleaner type hierarchy
> like that of NumericPrelude you should not define this instance,
> because it would open new surprising sources of errors:
> http://www.haskell.org/haskellwiki/Num_instance_for_functions
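For concreteness, the instance discussed on that wiki page looks roughly
like the sketch below. (In Haskell 98, Eq and Show are superclasses of
Num, so the stub instances with undefined methods mentioned above would
also be needed; later versions of GHC dropped those superclasses, so the
sketch compiles on its own there.)

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Sketch of the pointwise Num instance for functions discussed on
-- the wiki page above.
instance Num b => Num (a -> b) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)  -- numerals become constant functions

main :: IO ()
main = print ((sin + cos) 0)  -- pointwise: sin 0 + cos 0 = 1.0
```

Note that fromInteger turns every numeric literal into a constant
function, which is exactly the design choice questioned below.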
This problem is not caused by defining f+g, but by defining numerals
as constants. In some contexts it is natural to let the identity
function be written as 1, and then 2 = 2*1 is not a constant. With
this definition, a (unital) ring may be identified with an additive
category with only one object.
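To illustrate that view, here is a sketch of an endomorphism wrapper in
which multiplication is composition, 1 is the identity map, and a
numeral n denotes the scaling map (n *), so 2 = 2*1 is the doubling
map rather than a constant. The name Endo' and the choice of reading
numerals as scalings are my own illustration, not anyone's library:

```haskell
-- Endomorphisms under pointwise addition and composition; a sketch.
newtype Endo' a = Endo' (a -> a)

instance Num a => Num (Endo' a) where
  Endo' f + Endo' g = Endo' (\x -> f x + g x)  -- pointwise addition
  Endo' f * Endo' g = Endo' (f . g)            -- composition as product
  negate (Endo' f)  = Endo' (negate . f)
  abs    _          = error "abs: not meaningful for Endo'"
  signum _          = error "signum: not meaningful for Endo'"
  fromInteger n     = Endo' (fromInteger n *)  -- numeral n = scaling by n

apply :: Endo' a -> a -> a
apply (Endo' f) = f
```

With this reading, apply (1 + 1) x, apply (2 * 1) x, and apply 2 x all
give 2*x, consistently, and no numeral is a constant function.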
In mathematical terms, the set of functions is then a (mathematical)
module (a "generalized vector space"), not a ring.
Anyway, Num is a class for unifying some common computer numeric
types, not for doing algebra. If its (+) were derived from an
additive monoid (or magma) class, then defining f+g would not
interfere with Num.
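A sketch of what such a hierarchy could look like (the class name
Additive and the operator |+| are invented here for illustration;
NumericPrelude uses its own names):

```haskell
-- A hypothetical additive-magma class, kept separate from Num.
class Additive a where
  (|+|) :: a -> a -> a

-- Ordinary numeric types get their usual addition:
instance Additive Integer where
  (|+|) = (+)

instance Additive Double where
  (|+|) = (+)

-- Functions into an additive type are added pointwise,
-- (f |+| g) x = f x |+| g x, without touching Num at all:
instance Additive b => Additive (a -> b) where
  f |+| g = \x -> f x |+| g x

main :: IO ()
main = print ((id |+| (* 2)) (3 :: Integer))  -- prints 9
```

Because the function instance lives in Additive rather than Num,
numeric literals keep their usual meaning and none of the wiki page's
surprises arise.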
Hans
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe