On Tue, Oct 12, 2010 at 3:35 PM, cej38 <junkerme...@gmail.com> wrote:

> The more that I think about it, the more I would rather have a set of
> equalities that always work.  float= was a good try.
>
<RANT>

Every fucking language I've ever worked on has had this problem- "floats are
broken!"  And in every single one, people keep coming up with the same wrong
answers to it.  C, Java, OCaml, Haskell, SQL, now Clojure.  Makes me
wonder what language you are coming from where floats *aren't* broken.  Some
languages patch their print methods to hide the errors in the simple cases,
but I've yet to see one where 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
+ 0.1 + 0.1 = 1.0.  Try it in the language of your choice.
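If you want to see it from the JVM side (which is what Clojure's doubles sit
on), here's a minimal sketch:

```java
public class TenTenths {
    public static void main(String[] args) {
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            sum += 0.1;  // 0.1 has no exact base-2 representation
        }
        System.out.println(sum);        // prints 0.9999999999999999
        System.out.println(sum == 1.0); // prints false
    }
}
```

Each addition rounds to the nearest representable double, and the rounding
errors don't happen to cancel.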

First of all, the base of the floating point number just changes which
fractions "break".  For example, in base 10, 1/3 * 3 = 0.99999...  So no,
going to base 10 doesn't save you.  IEEE 754 floats are base 2, so 1/10 is
impossible to represent exactly, the same way 1/3 is impossible to
represent exactly in base 10.  Oh, and Clojure has, from its Java roots,
a BigDecimal class which does do base-10 arithmetic.
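Here's roughly what that looks like through Java's BigDecimal (and hence
through Clojure interop).  Note that 0.1 becomes exact- but exact division
of 1 by 3 just throws, because the expansion never terminates:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class DecimalDemo {
    public static void main(String[] args) {
        // 0.1 is exact in base 10, so ten of them sum to exactly 1
        BigDecimal tenth = new BigDecimal("0.1");
        BigDecimal sum = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) {
            sum = sum.add(tenth);
        }
        System.out.println(sum.compareTo(BigDecimal.ONE) == 0); // true

        // ...but 1/3 does not terminate in base 10
        try {
            BigDecimal.ONE.divide(new BigDecimal("3"));
        } catch (ArithmeticException e) {
            System.out.println("1/3 needs rounding: " + e.getMessage());
        }

        // With a rounding context you get an approximation, and (1/3)*3 != 1
        BigDecimal third =
            BigDecimal.ONE.divide(new BigDecimal("3"), MathContext.DECIMAL64);
        System.out.println(third.multiply(new BigDecimal("3"))); // 0.9999...
    }
}
```

Same disease, different base.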

And no, going to rationals doesn't save you either.  Take the vector [1, 0,
1], and normalize it so its Euclidean length is 1, without error.  Have a
nice day.  *Any* finite precision representation is going to have round off
error.  Oh, and Clojure does have a rational class.
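The trap in that exercise: the length of [1, 0, 1] is sqrt(2), which is
irrational, so no rational- and no finite float- can represent it exactly.
A quick check of the floating-point side from Java:

```java
public class Sqrt2 {
    public static void main(String[] args) {
        double len = Math.sqrt(2.0);  // best double approximation of sqrt(2)
        System.out.println(len * len);         // prints 2.0000000000000004
        System.out.println(len * len == 2.0);  // prints false
    }
}
```

The rational representation fails the same way- you'd need infinitely many
digits in numerator and denominator before squaring gives you back exactly 2.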

And once you have round-off, you're going to have numeric errors.  At which
point, how you deal with those errors is what matters.  Numeric algorithms
fall into two broad categories- stable and unstable.  With stable algorithms,
errors tend to cancel each other out, so your final answer is going to be
pretty close to correct.  With unstable algorithms, errors tend to accumulate
(I'm simplifying here for the newbies; those who know the details will
forgive the hand-waving).
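A classic illustration of the difference (my example, not from the thread):
summing the Taylor series for e^-x directly is unstable for large x, because
huge alternating terms have to cancel down to a tiny result; computing 1/e^x
from the all-positive series is stable.

```java
public class Stability {
    public static void main(String[] args) {
        double x = 20.0;

        // Unstable: alternating terms as large as ~4e7 must cancel
        // down to a ~2e-9 answer, wiping out the accuracy.
        double naive = 0.0, term = 1.0;
        for (int n = 1; n <= 120; n++) {
            naive += term;
            term *= -x / n;
        }

        // Stable: every term in the series for e^x is positive,
        // so nothing cancels; then take the reciprocal.
        double stable = 0.0;
        term = 1.0;
        for (int n = 1; n <= 120; n++) {
            stable += term;
            term *= x / n;
        }
        stable = 1.0 / stable;

        System.out.println("true:   " + Math.exp(-x));
        System.out.println("naive:  " + naive);   // wildly wrong
        System.out.println("stable: " + stable);  // agrees to ~15 digits
    }
}
```

Both loops do the same arithmetic with the same precision- only the order
and sign structure differ, and that's what decides whether the errors
cancel or compound.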

No, throwing more bits at the problem won't save an unstable algorithm,
it'll just take longer for the unstable algorithm to finally destroy any and
all accuracy.  Double precision numbers give you enough precision to measure
the distance from here to the moon- in multiples of the wavelength of red
light.  Precisely.  Also note there is a difference between precision and
accuracy- if I say pi is equal to 3.179830482027405068272948472, that's very
precise- but not very accurate.  Unstable algorithms destroy accuracy, not
precision.
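A rough sanity check on that moon claim- the distance and wavelength values
here are my own assumed figures, not from the post:

```java
public class MoonPrecision {
    public static void main(String[] args) {
        double moonDistanceMeters = 3.844e8;  // assumed mean Earth-Moon distance
        double redWavelengthMeters = 700e-9;  // assumed wavelength of red light
        double wavelengths = moonDistanceMeters / redWavelengthMeters;
        double resolvable = Math.pow(2, 53);  // integers a 53-bit significand can count

        System.out.printf("distance in wavelengths: %.3e%n", wavelengths); // ~5.5e14
        System.out.printf("2^53:                    %.3e%n", resolvable);  // ~9.0e15
    }
}
```

About 5.5e14 wavelengths versus about 9.0e15 exactly-representable integers-
a double has headroom to spare.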

And no, ranged floats (interval arithmetic) don't help either- they grossly
overestimate the error of stable algorithms.  A classic example of a
super-stable algorithm is Newton's method- generally, the number of correct
bits doubles (or more) every iteration, as the algorithm converges to the
answer.  But ranged floats have their error bound growing every iteration.
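Here's what that convergence looks like for Newton's method on x^2 = 2
(a sketch of the standard iteration, not code from the thread):

```java
public class NewtonSqrt {
    public static void main(String[] args) {
        double x = 1.0;  // deliberately bad starting guess
        for (int i = 1; i <= 6; i++) {
            x = (x + 2.0 / x) / 2.0;  // Newton step for f(x) = x^2 - 2
            System.out.printf("iter %d: error = %.2e%n",
                    i, Math.abs(x - Math.sqrt(2.0)));
        }
        // The error roughly squares each iteration until it hits
        // the floor of double precision.
    }
}
```

The actual error collapses quadratically, while an interval bound computed
alongside it can only ever get wider- which is exactly the overestimate the
paragraph above is complaining about.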

Oh, and double precision floating point numbers (especially if they're
unboxed) are between 10x and 1000x as fast as other representations- thanks
to hardware acceleration.  Your CPU can execute one floating point operation
per clock cycle, two or more if the code is vectorized.
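One way to feel the boxing cost from Java- this is an unscientific
micro-benchmark (no JIT warmup, timings are machine-dependent), so treat
the numbers as illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

public class Boxing {
    public static void main(String[] args) {
        int n = 2_000_000;
        double[] primitives = new double[n];
        List<Double> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            primitives[i] = i * 0.5;
            boxed.add(i * 0.5);
        }

        long t0 = System.nanoTime();
        double s1 = 0.0;
        for (double d : primitives) s1 += d;  // unboxed: tight loop over raw doubles
        long t1 = System.nanoTime();

        double s2 = 0.0;
        for (Double d : boxed) s2 += d;       // boxed: pointer chase + unbox each element
        long t2 = System.nanoTime();

        System.out.printf("unboxed: %d us, boxed: %d us, sums equal: %b%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000, s1 == s2);
    }
}
```

Same additions in the same order, so the sums are bit-identical- the only
difference is the memory traffic and allocation the boxed version drags
along.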

Floating point is not broken.

</RANT>

Brian
