Stefan Krah <stefan-use...@bytereef.org> added the comment:

For the record, I prefer Python's behavior. The quantize()-based definition
of to-integral does not work well for arbitrary-precision input and leads
to situations like:


Precision: 1
Maxexponent: 1
Minexponent: -1

tointegral  101  ->  101
tointegral  101.0  ->  NaN  Invalid_operation
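
This can be reproduced with Python's decimal module (a minimal sketch; the
context mirrors the settings above, and the exact output may vary between
decimal versions): to_integral_value() accepts both operands, while going
through quantize() -- which is how the specification defines to-integral
for a negative-exponent operand -- signals Invalid_operation for 101.0:

from decimal import Decimal, Context, InvalidOperation

c = Context(prec=1, Emax=1, Emin=-1)

# Python's to-integral does not apply the precision/exponent checks,
# so both operands round to the same integer:
print(c.to_integral_value(Decimal("101")))    # 101
print(c.to_integral_value(Decimal("101.0")))  # 101

# The quantize-based route enforces the context precision and signals:
try:
    c.quantize(Decimal("101.0"), Decimal("1E+0"))
except InvalidOperation:
    print("Invalid_operation: the result needs 3 digits, prec is 1")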


A comment in tointegral.decTest suggests that the to-integral definition 
was modeled after IEEE 854 and 754r:

-- This set of tests tests the extended specification 'round-to-integral
-- value' operation (from IEEE 854, later modified in 754r).
-- All non-zero results are defined as being those from either copy or
-- quantize, so those are assumed to have been tested.
-- Note that 754r requires that Inexact not be set, and we similarly
-- assume Rounded is not set.


This definition of course works fine as long as the input does not
have more than 'precision' digits and Emax is sufficiently large. I
think that for arbitrary-precision input the definition should read:


"Otherwise (the operand has a negative exponent) the result is the
 same as using the quantize operation using the given operand as the
 left-hand-operand and 1E+0 as the right-hand-operand. For the purpose
 of quantizing a temporary context is used with the precision of the
 operand as the precision setting, Emax >= prec and Emin <= -Emax. The
 rounding mode is taken from the context, as usual."
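
As a rough Python sketch of that wording (the helper name, the pass-through
handling of special values, and the concrete choice Emax = prec,
Emin = -prec are mine, not part of the proposal):

from decimal import Decimal, Context, getcontext

def proposed_to_integral(x, context=None):
    """Hypothetical helper following the proposed wording.

    Non-negative exponent: same as copy.  Negative exponent: quantize
    against 1E+0 in a temporary context whose precision is the number
    of digits of the operand, with Emax >= prec and Emin <= -Emax.
    The rounding mode is taken from the caller's context.
    """
    if context is None:
        context = getcontext()
    if not x.is_finite():
        return Decimal(x)               # NaNs/infinities not covered here
    if x.as_tuple().exponent >= 0:
        return Decimal(x)               # the 'copy' branch of the spec
    prec = len(x.as_tuple().digits)     # precision of the operand
    tmp = Context(prec=prec, Emax=prec, Emin=-prec,
                  rounding=context.rounding)
    return tmp.quantize(x, Decimal("1E+0"))

c = Context(prec=1, Emax=1, Emin=-1)
print(proposed_to_integral(Decimal("101.0"), c))  # 101, no Invalid_operation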
