In article <54c2299d$0$13005$c3e8da3$54964...@news.astraweb.com>,
steve+comp.lang.pyt...@pearwood.info says...
>
> I don't think that a raise of 0.10000000000000001 (10%),
> 0.035000000000000003 (3.5%) or 0.070000000000000007 (7%) is quite what
> people intended.
>
> (Don't use binary floating point numbers for anything related to
> money. Just don't.)
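For the record, those long tails are nothing mysterious; they are just the
nearest binary doubles printed to 17 significant digits (the numbers below
are the ones quoted above, nothing else is assumed):

    # Where the quoted digits come from: the doubles nearest to 0.1,
    # 0.035 and 0.07, shown with 17 significant digits.
    for rate in (0.1, 0.035, 0.07):
        print(format(rate, ".17g"))
    # 0.10000000000000001
    # 0.035000000000000003
    # 0.070000000000000007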
Few rules are written in stone, and that one isn't either. It all depends
on the context. People often read these bits of well-intentioned advice and
go on to immediately make some kind of gospel out of them. Why is that?

Why do you think I can't ever use floats for money? Is there some unspoken
rule that money must always be handled with enough precision to give a
microscope a headache?

- What if I am storing money as an integer, or as a two-decimal-place
  Decimal, to manage my electrical bill, and I only use a float to express
  a percentage?

- What if the float operation is not a repetitive operation that would
  indeed invariably lead to rounding errors, but a once-in-a-lifetime
  operation? (A sketch of that case is at the end of this post.)

I'm not saying I don't agree we should avoid it. I'm saying we also need to
properly contextualize it before we decide to do so. If I'm writing a
banking system, or a POS, you can be damn sure it will be hard to spot a
float in my code. But if I'm writing my household electrical bill yearly
report, or writing a damn code snippet on a Python group to illustrate how
type hinting mangles your code, who gives a flying arse if I'm using a
float to express money? Sheesh!
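Just to make that household-bill case concrete, here is a minimal sketch:
the amount is stored exactly (integer cents, or a Decimal) and a float rate
is applied exactly once, then rounded to the cent. The 123.45 bill and the
7% rate are invented for illustration; nothing here comes from the quoted
posts.

    # Minimal sketch: money stored exactly, a float percentage applied once.
    from decimal import Decimal

    # Option 1: integer cents plus a one-off float rate, rounded immediately.
    bill_cents = 12345                        # 123.45, stored as cents
    rate = 0.07                               # 7%, as a float
    increase_cents = round(bill_cents * rate) # one rounding step, no accumulation
    print(increase_cents)                     # 864 -> 8.64

    # Option 2: the same calculation with Decimal end to end.
    bill = Decimal("123.45")
    increase = (bill * Decimal("0.07")).quantize(Decimal("0.01"))
    print(increase)                           # 8.64

Either way the answer agrees to the cent; the float only starts to matter
when unrounded intermediate results are carried through many such
operations.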