Mark Dickinson <dicki...@gmail.com> added the comment:

> For example C# uses 80 bit precision

No, I don't think that's true.  It uses the x87, whose 80-bit registers 
carry a 64-bit significand, but I'm fairly sure that (as is almost always 
the case in a Windows environment, unless you're using Delphi, apparently) 
the FPU's precision-control field is still set to 53 bits.
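
For anyone who wants to check this on their own machine, here's a minimal 
MSVC-specific sketch (assuming <float.h>'s _controlfp_s; note that the 
_MCW_PC mask is only supported in 32-bit x86 builds):

    #include <float.h>   /* _controlfp_s, _MCW_PC, _PC_24/_PC_53/_PC_64 */
    #include <stdio.h>

    int main(void)
    {
        unsigned int cw = 0;
        /* Read the control word without changing anything
           (newControl and mask both zero). */
        _controlfp_s(&cw, 0, 0);
        switch (cw & _MCW_PC) {
        case _PC_53: puts("precision control: 53-bit"); break;
        case _PC_64: puts("precision control: 64-bit"); break;
        case _PC_24: puts("precision control: 24-bit"); break;
        }
        return 0;
    }

In a typical 32-bit Windows process this should print "precision control: 
53-bit".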

> if I understand 
> http://blogs.msdn.com/b/davidnotario/archive/2005/08/08/449092.aspx well.

And that article explicitly confirms the use of 53-bit precision:

"Precision is set by default in VC++ and CLR apps to ‘double precision’, which 
means that if you are operating with operands of type float, results of 
operations done with floats actually exist in the x87 stack as if there were of 
type double. In fact, it’s even weirder than that. They will have the mantissa 
of a double, but the range (exponent) of an extended double (80 bit)."

i.e., it's using the x87 FPU with the precision control set to 53 bits; only 
the exponent range is that of the 80-bit extended format.
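
That "double mantissa, extended exponent" combination is observable, by the 
way.  A hedged sketch (plain C; the outcome depends on the compiler keeping 
the intermediate on the x87 stack, so treat it as an illustration rather 
than a reliable test):

    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double x = DBL_MAX;
        /* On the x87 stack with PC=53, x * 2.0 has a 53-bit significand
           but a 15-bit exponent, so it doesn't overflow, and dividing
           by 2.0 recovers DBL_MAX.  Under SSE2 arithmetic (or if the
           compiler spills the intermediate to memory as a double) the
           product overflows and this prints "inf" instead. */
        double y = x * 2.0 / 2.0;
        printf("%g\n", y);
        return 0;
    }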

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue9980>
_______________________________________