In article <[EMAIL PROTECTED]>,
Gary Herron <[EMAIL PROTECTED]> writes:
|> 
|> The IEEE standard specifies (plus or minus) infinity as the result of
|> division by zero.  This makes sense since such is the limit of division
|> by a quantity that goes to zero.  The IEEE standard then goes on to
|> define reasonable results for arithmetic between infinities and real
|> values.  The production of, and arithmetic on, infinities is a choice
|> that any application may want to allow or not.
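
For anyone who wants to see the behaviour Gary describes, here is a
minimal Python sketch.  Plain CPython raises ZeroDivisionError on float
division by zero, so numpy is used below to expose the underlying IEEE
754 rules; the values and names are purely illustrative.

    import numpy as np

    with np.errstate(divide='ignore', invalid='ignore'):
        print(np.float64(1.0) / np.float64(0.0))    # inf
        print(np.float64(-1.0) / np.float64(0.0))   # -inf
        print(np.float64(0.0) / np.float64(0.0))    # nan

    inf = float('inf')
    print(inf + 1.0)     # inf: arithmetic between infinities and reals
    print(1.0 / inf)     # 0.0
    print(inf - inf)     # nan: not every such operation is defined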

The mistake you have made (and it IS a mistake) is in assuming that the
denominator approaches zero from the direction indicated by its sign.
There are many reasons why it often does not, but let's give just
two:

    It may be a true zero - i.e. a count that is genuinely zero, or
the result of subtracting a number from itself.

    It may be a negative zero that has had its sign flipped by an
artifact of the code.  For example (a numerical sketch follows):

    lim(a->0 from above) 0.001*b/(a - 1.001*a)
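
To make that concrete, here is a small numerical sketch (my own
construction, with illustrative values).  When a sits at the bottom of
the subnormal range, 1.001*a rounds back to a, so the computed
denominator is a *positive* zero even though the quantity it
approximates is negative, and IEEE division then returns +inf with the
wrong sign.  The same check shows that a "true" zero such as b - b
comes out positive as well.

    import math
    import numpy as np

    b = 1.0
    a = 5e-324                     # smallest positive subnormal double
    denom = a - 1.001 * a          # algebraically -0.001*a, i.e. negative
    print(denom, math.copysign(1.0, denom))   # 0.0 1.0 -- a positive zero
    print(math.copysign(1.0, b - b))          # 1.0 -- a true zero, also positive

    with np.errstate(divide='ignore'):
        print(np.float64(0.001) * b / np.float64(denom))   # inf, not -inf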

I fully agree that infinity arithmetic is fairly well-defined for
most operations, but it most definitely is not for division by zero.
The production of infinities should be reserved for operations that
have actually overflowed.
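
For contrast, the overflow case - the one where an infinity is exactly
what you want - looks like this (again a numpy sketch with illustrative
values):

    import numpy as np

    with np.errstate(over='ignore'):
        x = np.float64(1e308)
        print(x * 10.0)      # inf: a finite result that genuinely overflowed
        print(-x * 10.0)     # -inf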


Regards,
Nick Maclaren.
-- 
http://mail.python.org/mailman/listinfo/python-list
