At 18:32 +0000 7/31/07, peter baylies wrote:
>On 7/31/07, Paul Cochrane <[EMAIL PROTECTED]> wrote:
> >    return (fabs(x - y) <= fabs(x + y)*EPSILON) ? 1 : 0;
>
>That may not be a bad idea, but I think there's a bug in that code --
>take, for example, the case where x and y both equal approximately a
>million (or more).
>
>Maybe you wanted this instead:
>
>    return (fabs(x - y) <= EPSILON) ? 1 : 0;
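To make the contrast concrete, here is a minimal, self-contained sketch of the two tests quoted above. The function names, the 1e-9 tolerance, and the example values are my own choices, not from Paul's or Peter's code:

    #include <math.h>
    #include <stdio.h>

    /* Placeholder tolerance for the sketch; the original code's EPSILON
     * is not specified in the thread. */
    #define EPSILON 1e-9

    /* Relative test, as in Paul's line: the allowed gap scales with the
     * magnitude of the operands. */
    static int nearly_equal_relative(double x, double y)
    {
        return fabs(x - y) <= fabs(x + y) * EPSILON;
    }

    /* Absolute test, as in Peter's suggestion: a fixed gap regardless
     * of how big the operands are. */
    static int nearly_equal_absolute(double x, double y)
    {
        return fabs(x - y) <= EPSILON;
    }

    int main(void)
    {
        double a = 1.0e6, b = 1.0e6 + 1.0e-4;
        printf("relative: %d  absolute: %d\n",
               nearly_equal_relative(a, b), nearly_equal_absolute(a, b));
        return 0;
    }

With those values the relative test accepts the pair while the absolute test rejects it, which is exactly the "approximately a million" case Peter describes.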
This physicist thinks Paul is right here. His formula is equivalent to allowing larger variations when the numbers are large. That's a logarithmic approach that makes sense for very large or very small numbers: numbers can be considered equal if they vary by less than some small fraction of their sum. Actually it's pretty much the same as masking off a few bits at the right end of the mantissa in a pair of IEEE floats (roughly like the sketch at the end of this message). That works if the items being compared are results of calculations that are known to be normalized, they're not in the super-large range where the mantissa is less than 1/2, and we're not dealing with NaNs...

There are reasons for checking for complete exactness, though. Telephone numbers are best treated as strings, but a lot of less mathematical IT folks allow a type-less compiler to assign them to 10-digit floats. Nearly correct might not be good enough for that, especially if an extension is added after a period. Testing for exactly zero should be possible. And minus zero? That reminds me too much of ones' complement arithmetic on a Control Data 3800.

-- 
--> From the U S of A, the only socialist country that refuses to admit it. <--
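The bit-masking comparison mentioned above, as a minimal sketch. The function name, the choice of four masked bits, and the memcpy trick are mine; it assumes finite, normalized, same-sign IEEE 754 doubles, per the caveats in the message:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <math.h>

    /* Treat two doubles as equal if they agree once the low MASK_BITS
     * bits of the mantissa are ignored. */
    #define MASK_BITS 4

    static int nearly_equal_masked(double x, double y)
    {
        uint64_t xi, yi;
        memcpy(&xi, &x, sizeof xi);   /* reinterpret the bit patterns safely */
        memcpy(&yi, &y, sizeof yi);
        uint64_t mask = ~((UINT64_C(1) << MASK_BITS) - 1);
        return (xi & mask) == (yi & mask);
    }

    int main(void)
    {
        double x = 1.0;
        double y = nextafter(1.0, 2.0);          /* one ULP above 1.0 */
        printf("%d\n", nearly_equal_masked(x, y)); /* prints 1: low bits masked away */
        return 0;
    }

Note that two values straddling a masking boundary can differ by a single ULP and still compare unequal here, so it only approximates "ignore the last few bits" rather than guaranteeing a fixed tolerance.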