Thanks!
The main challenge for a static analysis like the one you describe is that the
badlands aren't compositional. The `fllog` function reduces error on
most of its domain, for example. Another place this crops up is in
approximating special functions using polynomials: if you use Horner's
rule, it *tends to* result in something with much less error than you
might expect. I also ran into noncompositionality the other day, when I
told someone that exponentiated log probabilities can't be trusted when
they're close to zero... and then ran some simulations that suggested
the central limit theorem is nicely at play and averages out worst-case
error most of the time.
Oh, yeah, that was you that I accidentally lied to. :D
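To illustrate the Horner's rule point, here's a quick Python sketch (my own toy example, not from the article): evaluate a degree-6 truncated exp series by Horner's rule and by naive monomial summation, and compare both against the same polynomial evaluated in exact rational arithmetic. Away from the polynomial's badlands, Horner lands within a few ulps, far below what a worst-case per-operation bound would allow.

```python
from fractions import Fraction

# Degree-6 Taylor polynomial for exp(x), highest-degree coefficient first:
# x^6/720 + x^5/120 + x^4/24 + x^3/6 + x^2/2 + x + 1.
COEFFS = [1/720, 1/120, 1/24, 1/6, 1/2, 1.0, 1.0]

def horner(coeffs, x):
    """Evaluate a polynomial (highest-degree coefficient first) by Horner's rule."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

def naive(coeffs, x):
    """Sum the monomials directly, with explicit powers."""
    n = len(coeffs) - 1
    return sum(c * x ** (n - i) for i, c in enumerate(coeffs))

def exact(coeffs, x):
    """The same float coefficients, but evaluated in exact rational arithmetic."""
    acc = Fraction(0)
    xf = Fraction(x)
    for c in coeffs:
        acc = acc * xf + Fraction(c)
    return float(acc)

x = 0.5
h, n, e = horner(COEFFS, x), naive(COEFFS, x), exact(COEFFS, x)
# Both relative errors are tiny here; near a root of the polynomial
# (its badlands), relative error can still blow up -- which is exactly
# the noncompositionality problem.
print(abs(h - e) / e, abs(n - e) / e)
```

This is deliberately a benign input; the point is that the measured error is orders of magnitude below the pessimistic static bound, not that Horner fixes the badlands.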
That being said, I've played around a bit with making languages that
reduce floating-point error automatically, and the most successful so
far just avoids the badlands at every intermediate computation. It does
a runtime search for formulations that don't increase error too much, by
backtracking when it wanders into the badlands with inexact arguments.
The produced code is in continuation-passing style, and when it's fully
inlined, it looks a lot like math library functions I've written:
(cond [input-in-subdomain-1? formulation-1]
      [input-in-subdomain-2? formulation-2]
      ...)
It probably wouldn't scale well past library functions, but I could
imagine using it as part of a larger system that uses differentiation
(automatic, empirical, or user-supplied) to prune the search and deal
better with noncompositionality.
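Concretely, that inlined shape is the same trick `log1p`-style library functions use: pick a formulation per subdomain so no intermediate computation lands in the badlands. A toy Python sketch of the idea (my own example, not output from the search system), for log(1+x):

```python
import math

def log1p_sketch(x):
    # Subdomain 1: |x| below half an ulp of 1.0, so (1.0 + x) would round
    # to exactly 1.0 and math.log would return 0.0 -- total cancellation.
    # Here log(1+x) = x to double precision, so return x directly.
    if abs(x) < 2.0 ** -53:
        return x
    # Subdomain 2: elsewhere. (A production version needs more subdomains:
    # this naive formulation still loses digits for small-but-representable x,
    # which is why real log1p implementations use further tricks.)
    return math.log(1.0 + x)

print(math.log(1.0 + 1e-18))   # 0.0 -- the naive formulation loses everything
print(log1p_sketch(1e-18))     # 1e-18
```

Each branch is a formulation paired with the subdomain where it stays out of the badlands, which is what the backtracking search is picking apart at runtime.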
Neil
On 10/28/2014 01:21 PM, Laurent wrote:
This is a very nice article indeed. A lot of useful information.
Just a not-at-all-thought-through idea: since we can know the badlands
of the primitives, couldn't we have something like an annotation
system that tells you where the badlands of a function composed
from such primitives are?
For example, each primitive would be associated with a set of critical points
or intervals, possibly with typical error bounds, so we know which intervals
are interesting for random testing, and can possibly compute error bounds
(sometimes +inf.0).
Probably not that simple I guess.
Laurent
On Mon, Oct 27, 2014 at 1:14 PM, Neil Toronto <neil.toro...@gmail.com> wrote:
This is the article I mentioned in my RacketCon talk this year,
which is an expansion of my RacketCon talk from last year.
Self-serving Hacker News link:
https://news.ycombinator.com/item?id=8514965
(Every week in Hacker News discussions, I see some of the same
misinformed arguments about floating point. This article addresses
most of them, so please go upvote it for the sake of informed
discussion. We do not want people being wrong on the Internet!)
A big thanks to Konrad, his co-editor Matthew Turk, and the CiSE
editing staff, who worked very hard with us on the article. Also,
thanks to Vincent Lefèvre (http://www.vinc17.org), Matthias
Felleisen, and Robby Findler, who reviewed it.
Neil
On 10/27/2014 06:05 AM, Konrad Hinsen wrote:
The current issue of "Computing in Science and Engineering" has
a nice
article by Neil Toronto and Jay McCarthy on "Practically Accurate
Floating-Point Math" in Racket. It is currently freely available
via IEEE's Computing Now platform:
http://www.computer.org/portal/web/computingnow/content?g=53319&type=article&urlTitle=practically-accurate-floating-point-math
Free access is typically limited in time, so grab your copy now!
Konrad
____________________
Racket Users list:
http://lists.racket-lang.org/users