On 2007-05-09, Robert Kern <[EMAIL PROTECTED]> wrote:
> Grant Edwards wrote:
>> I'm pretty sure the answer is "no", but before I give up on the
>> idea, I thought I'd ask...
>>
>> Is there any way to do single-precision floating point
>> calculations in Python?
>>
>> I know the various array modules generally support arrays of
>> single-precision floats.  I suppose I could turn all my
>> variables into single-element arrays, but that would be way
>> ugly...
>
> We also have scalar types of varying precisions in numpy:
>
> In [9]: from numpy import *
>
> In [10]: float32(1.0) + float32(1e-8) == float32(1.0)
> Out[10]: True
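[The same single-precision rounding can also be had without numpy by
round-tripping values through the standard library's struct module.
A minimal sketch, not part of the quoted session above:

    import struct

    def to_float32(x):
        # Pack as a 4-byte IEEE-754 single and unpack back to a Python
        # float, i.e. round x to the nearest 32-bit-representable value.
        return struct.unpack('f', struct.pack('f', x))[0]

    # Mirrors the numpy session: the sum has to be re-rounded, since the
    # addition itself is still carried out in double precision.
    print(to_float32(to_float32(1.0) + to_float32(1e-8)) == 1.0)  # True
    print(1.0 + 1e-8 == 1.0)                                      # False
]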
Very interesting.  Converting a few key variables and intermediate
values to float32 and then back to CPython floats each time through
the loop would probably be more than sufficient (a sketch of that
round-trip appears at the end of this post).  So far as I know, I
haven't run into any cases where the differences between 64-bit
prototype calculations in Python and 32-bit production calculations
in C have been significant.  I certainly try to design the
algorithms so that it won't make any difference, but it's a nagging
worry...

> In [11]: 1.0 + 1e-8 == 1.0
> Out[11]: False
>
> If you can afford to be slow,

Yes, I can afford to be slow.  I'm not sure I can afford the
decrease in readability.

> I believe there is an ASPN Python Cookbook recipe for
> simulating floating point arithmetic of any precision.

Thanks, I'll go take a look.

-- 
Grant Edwards                   grante             Yow!  It's the RINSE
                                  at               CYCLE!!  They've ALL IGNORED
                               visi.com            the RINSE CYCLE!!
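[A minimal sketch of the float32 round-trip described above, assuming
numpy is installed; the loop and its variable names are made up purely
for illustration:

    import numpy as np

    def f32(x):
        # Round to IEEE-754 single precision, then hand back a plain
        # Python float so the rest of the prototype stays readable.
        return float(np.float32(x))

    # Hypothetical inner loop: only the key state variable and inputs
    # are re-rounded each pass; the multiply-adds themselves still run
    # in double precision, which is usually close enough when checking
    # a prototype against 32-bit production code.
    state = 0.0
    for sample in (0.5, 0.25, 0.125):
        state = f32(0.9 * f32(state) + 0.1 * f32(sample))
]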