On Tue, Aug 14, 2001 at 07:34:26AM -0700, David Roundy wrote:
| On Mon, Aug 13, 2001 at 12:37:45PM -0500, Dimitri Maziuk wrote:
| > * Craig Dickson ([EMAIL PROTECTED]) spake thusly:
| > > I don't see how. I see it as a legitimate compiler optimization. If you
| > > have "double f = 4;", and you compile 4 as a double-precision value
| > > rather than as an int (which would then require an immediate
| > > conversion), how could that possibly break a program?
| >
| > Very simple: double f = 4 may be converted to eg. 4.000000000000000001234,
| > and any test for (sqrt(f) == 2.0) will fail. Of course if your (generic
| > "you", not personal) code is like that, you probably shouldn't be playing
| > with floats.
|
| Actually, any 32 bit int will be exactly converted into a double with no
| loss of precision...
|
| As far as the language definition goes, if you say double f = 4, the
| language assures you that the '4' will be converted to a double format.
| Whether it is done at compile time or at runtime makes no difference.

The point is that binary FP can only represent a subset of the real
numbers. For example, .1 can NOT be represented exactly in binary FP. If a
floating point operation ever yields exactly the value you are looking for
(2.0 in the above example), you are lucky. More likely the operation will
yield the value you are looking for, plus or minus some epsilon. It's not
that floating point is inexact or inaccurate, just that rounding errors
and the limits of representation tend to compound as more operations are
performed.

It is best to compare FP values against a range (use <, <=, >, or >=, not
== or !=), or convert them to an int first (which will obviously lose any
precision beyond the binary point).

-D