If you are giving a talk about Decimal -- and trying to stamp out the inappropriate use of floats -- you have to first inform people that what they learned as 'decimals' as children was not floating point, despite the fact that we write them the same way.
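A quick interpreter demo makes the point concrete (just the standard library's decimal module; the exact printed values are what CPython produces for binary floats):

```python
from decimal import Decimal

# 0.1 has no exact binary representation, so float arithmetic
# carries a tiny error that school 'decimal' arithmetic never would:
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Decimal keeps the digits you actually wrote, so it behaves
# the way people were taught decimals behave:
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))  # True

# But round the float result to 5 significant figures and the
# error disappears -- which is why scientists rarely notice it:
print(format(0.1 + 0.2, '.5g'))  # 0.3
```

Showing both halves -- the surprising False and the harmless rounded result -- covers the two reactions you get from the audience.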
If I ever get the time machine, I am going back in time and demanding that floating point numbers be expressed as 12345:678 instead of 12345.678, because it would save so much trouble. Never has the adage 'It's not what you don't know that bites you. It's what you know that ain't so.' been more apt.

I have done much better speaking about this topic to a bunch of incredulous people when I next explain that scientists really don't care about accuracy in their calculations. (This will surprise them.) Most scientific calculations have some real world measurement in them, and for most real world measurements, if you are getting even 5 digits of precision, you are doing really, really well. This means that scientists are going to be throwing away all the extra digits they get out of a floating point representation, so they don't have to care how accurate those extra digits are. As long as their results are good in the first 5, it won't matter. (Depending on time constraints, a review of significant figures -- what they are and what they mean -- is good here.)

It is really hard to get the concept of Decimal across to people who already have that concept in their mind, but think it is called Float. You have to first teach them that they don't know anything about Float, and get them to reboot their brains, before you can install this new knowledge. Otherwise their brains will just overwrite your new knowledge with 'ah, just use a Float' as soon as you stop speaking. Same day, even.

Laura
-- 
https://mail.python.org/mailman/listinfo/python-list