Thanks Richard.
I've some stuff too, but I need to look it up. A few years ago I built
a small test spreadsheet for Gnumeric when working with Jody Goldberg.
In the early 2000s, Jody contacted the R developers (I think Duncan
Murdoch) to ask if it was OK for Gnumeric to use R's distribution
function approximations.
John,
I would be happy to participate in designing the test suite you suggest.
About a year ago I revised FAQ 7.31, based on my talk at the Aalborg R
conference. It now points, in addition to the Goldberg paper that has
been referenced there for a long time, to my appendix on precision.
Yes. I should have mentioned "optimizing" compilers, and I can agree with "never
trusting exact equality", though I consider conscious use of equality tests
useful.
Optimizing compilers have bitten me once or twice. Unfortunately, a lot of
floating-point work requires attention to detail.
> On 23 Apr 2017, at 14:49 , J C Nash wrote:
>
>
> So equality in floating point is not always "wrong", though it should be used
> with some attention to what is going on.
>
> Apologies to those (e.g., Peter D.) who have heard this all before. I suspect
> there are many to whom it is new.
For over 4 decades I've had to put up with people changing my codes because
I use equalities of floating point numbers in tests for convergence. (Note that
tests of convergence are a subset of tests for termination -- I'll be happy to
explain that if requested.) Then I get "your program isn't working".
On 04/21/2017 02:03 PM, (Ted Harding) wrote:
I've been following this thread with interest. A nice
collection of things to watch out for, if you don't
want the small arithmetic errors due to finite-length
digital representations of fractions to cause trouble!
However, as well as these small discrepancies, major
malfunctions can also result.
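One hedged example of such a malfunction (mine, not from the original message):
an exit condition that tests equality against a value the iterates never reach
exactly turns into an endless loop.
> x <- 0
> for (i in 1:10) x <- x + 0.1
> x == 1
[1] FALSE
> x - 1
[1] -1.110223e-16
A while (x != 1) x <- x + 0.1 loop would therefore never terminate.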
A good part of the problem in the specific case you initially presented
is that some non-integer numbers have an exact representation in the
binary floating point arithmetic being used. Basically, if the
fractional part is of the form 1/2^k for some integer k > 0, there is an
exact representation; otherwise, the fraction repeats in binary and must
be rounded.
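For instance (my illustration of that point, not code from the thread),
fractions built from powers of two compare exactly, while decimal fractions
like 0.1 do not:
> 0.5 + 0.25 == 0.75    # denominators are powers of 2: all exact
[1] TRUE
> 0.1 + 0.2 == 0.3      # 1/10 and 1/5 repeat in binary: rounded
[1] FALSE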
On 04/21/2017 05:19 AM, Paul Johnson wrote:
> We all agree it is a problem with digital computing, not unique to R. I
> don't think that is the right place to stop.
> What to do? The round example arose in a real funded project where 2 R
> programs differed in results, and the cause was that one person got 57
> and another got 58.
Your guideline #1 is invalid for R... compare 5L/3L to 5L %/% 3L (see the
sketch below). If you want to avoid automatic conversion to double then you
have to be cautious about which operators/functions you apply to them...
merely throwing in L everywhere is not going to help.
#2 refers to S3, but that is a completely different issue.
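To spell out that first point (a sketch of the behaviour being described, not
code from the post): the L suffix does create integers, but / promotes its
result to double, and only %/% stays in integer arithmetic:
> 5L / 3L               # "/" always returns double
[1] 1.666667
> 5L %/% 3L             # integer division keeps the integer type
[1] 1
> class(5L / 3L)
[1] "numeric"
> class(5L %/% 3L)
[1] "integer"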
The subject is messy. I vaguely remember learning this stuff on my
first numerical analysis course over 40 years ago. The classic
reference material (much newer, only 25 years old) is:
What Every Computer Scientist Should Know About Floating-Point
Arithmetic, David Goldberg, ACM Computing Surveys 23(1), March 1991.
I suggest you read some basic books on numerical analysis and/or talk
with a numerical analyst. You are (like most of us) an amateur at this
sort of thing trying to reinvent wheels. If you are concerned with
details, talk with experts. Don't assume what you don't know. This
list is *not* a reliable source of numerical analysis expertise.
We all agree it is a problem with digital computing, not unique to R. I
don't think that is the right place to stop.
What to do? The round example arose in a real funded project where 2 R
programs differed in results, and the cause was that one person got 57 and
another got 58.
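A hedged reconstruction of that 57-versus-58 discrepancy (my guess at the
computation; the post does not show it): two algebraically identical
expressions can land on opposite sides of the rounding boundary:
> round(100 * (23/40))    # 57.49999... rounds down
[1] 57
> round((100 * 23)/40)    # exactly 57.5 rounds to even
[1] 58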
Hi
The problem is that people using Excel, or probably other such spreadsheets, do
not encounter this behaviour, as Excel silently rounds all your calculations and
makes approximate comparisons without telling you it does so. Therefore most
people usually do not have any knowledge of floating point numbers.
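R's default printing does something superficially similar, which is worth
keeping in mind when comparing it with a spreadsheet (my illustration, not from
the original message): values are displayed rounded to 7 significant digits
even though the stored value differs:
> 0.1 + 0.2              # printed rounded, looks exact
[1] 0.3
> 0.1 + 0.2 == 0.3       # but the stored values differ
[1] FALSE
> print(0.1 + 0.2, digits = 17)
[1] 0.30000000000000004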
Also note that we see the same thing in Ruby (23/40 is integer division
there, hence the 0):
irb(main):001:0> 100*(23/40)
=> 0
irb(main):002:0> 100.0*(23.0/40.0)
=> 57.49999999999999
irb(main):003:0> (100.0*23.0)/40.0
=> 57.5
and in C:
hpages@latitude:~$ cat test.c
#include <stdio.h>

int main(void) {
    printf("%.15f\n", 100.0 * (23.0 / 40.0));  /* prints 57.499999999999993 */
    return 0;
}
Use all.equal(tolerance=0, aa, bb) to check for exact equality:
> aa <- 100*(23/40)
> bb <- (100*23)/40
> all.equal(aa,bb)
[1] TRUE
> all.equal(aa,bb,tolerance=0)
[1] "Mean relative difference: 1.235726e-16"
> aa < bb
[1] TRUE
The numbers there are rounded to 52 binary digits.
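To connect those numbers (my arithmetic, not part of the original message): the
absolute difference between aa and bb is exactly one unit in the last place of
57.5, and all.equal's figure is that ulp divided by the value:
> bb - aa
[1] 7.105427e-15
> 2^-47                  # one ulp for doubles between 32 and 64
[1] 7.105427e-15
> (bb - aa)/bb           # the "mean relative difference" above
[1] 1.235726e-16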
I might add that things that *look* like integers in R are not really
integers, unless you explicitly label them as such:
> str(20)
num 20
> str(20.5)
num 20.5
> str(20L)
int 20
>
I think that Python 2 will do integer arithmetic on things that look
like integers:
$ python2
...
>>> 30 / 20
1
This is FAQ 7.31. It is not a bug; it is the unavoidable problem of accurately
representing floating point numbers with a finite number of bits of precision.
Look at the following:
> a <- 100*(23/40)
> b <- (100*23)/40
> print(a, digits=20)
[1] 57.499999999999992895
> print(b, digits=20)
[1] 57.5
>
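The same difference can be shown without print's digits argument (a variant of
the example above, using base functions): sprintf with %.17g prints enough
digits to distinguish any two doubles, and identical() confirms they differ:
> sprintf("%.17g", c(a, b))
[1] "57.499999999999993" "57.5"
> identical(a, b)
[1] FALSE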