On Sun, 03 Aug 2008 17:30:29 -0500, Larry Bates wrote:

>> As you can see, the last two decimals are very slightly inaccurate.
>> However, it appears that when n in 1/n is a power of two, the decimal
>> does not get 'thrown off'. How might I make Python recognise 0.2 as
>> 0.2 and not 0.20000000000000001?
>>
>> This discrepancy is very minor, but it makes the whole n-th root
>> calculator inaccurate. :\
>
> What are they teaching in computer science classes these days?
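[Not in the original thread, but the point being asked about can be demonstrated directly with the stdlib `decimal` and `fractions` modules. A minimal sketch, assuming a modern CPython where floats are IEEE 754 doubles:]

```python
from decimal import Decimal
from fractions import Fraction

# 0.2 has no finite binary expansion, so the nearest representable
# double is stored instead. Decimal(float) reveals its exact value:
print(Decimal(0.2))
# 0.200000000000000011102230246251565404236316680908203125

# Reciprocals of powers of two ARE exact in binary, which is why
# 1/2, 1/4, 1/8, ... don't get 'thrown off':
print(Decimal(0.25))   # 0.25

# Fraction recovers the exact dyadic rational the float actually holds:
print(Fraction(0.2))   # 3602879701896397/18014398509481984
```

So the "discrepancy" isn't Python mis-reading 0.2; it's that the binary float closest to 0.2 is a slightly different number, while 0.25 is stored exactly.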
I don't know about these days, but 20-odd years ago there was no
discussion of floating point accuracy in the Comp Sci classes I did at
Melbourne Uni.

I did a class in computational mathematics, run by the maths department,
and it discussed a lot of issues about accuracy in float calculations.
However, if they mentioned anything about e.g. 0.2 not being exactly
representable in binary, I slept through it.

Maybe that's why I failed that class. *wry grin*

--
Steven
--
http://mail.python.org/mailman/listinfo/python-list