Dear Stavros,

Thank you very much for your helpful email and your patience.
> Perhaps you are thinking about the case where intermediate results are
> accumulated in higher-than-normal precision.  This technique only applies in
> very specialized circumstances, and it is not available to user code in most
> programming languages (including R).

Ah, that's probably where I went wrong. I thought R would take the
"0.1", the "0.3", and the "3", convert them to extended-precision binary
representations, do its calculations in that precision, and reduce the
result to an ordinary double-precision binary float only when it was
stored or printed.
Having read your explanation, I now see that was an unreasonable
expectation.
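For what it's worth, here is how I now picture it (I no longer remember
exactly what I computed, so this is only a representative example): the
literals are already rounded when they are parsed, and every intermediate
result is rounded again, all in ordinary double precision:

    print(0.1, digits = 17)        # 0.10000000000000001 -- inexact already
    print(0.3, digits = 17)        # 0.29999999999999999
    0.3 / 0.1 == 3                 # FALSE
    print(0.3 / 0.1, digits = 17)  # 2.9999999999999996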

>  I don't know whether R's sum function
> uses this technique or some other (e.g. Kahan summation), but it does manage
> to give higher precision than summation with individual arithmetic
> operators:
>
>     sum(c(2^63, 1, -2^63))           => 1
> but
>     Reduce(`+`, c(2^63, 1, -2^63))   => 0

That is very interesting!
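Out of curiosity I tried writing a compensated summation in plain R.
Interestingly, textbook Kahan summation still loses the 1 in your example
(the correction term itself gets absorbed when the large value is
subtracted back), but Neumaier's variant of it does recover it. I make no
claim that this is what sum() actually does; it is just a sketch of the
idea:

    # Neumaier's variant of Kahan compensated summation (a sketch,
    # not necessarily what R's sum() does internally).
    neumaier_sum <- function(x) {
      s <- x[1]
      comp <- 0                          # running compensation term
      for (xi in x[-1]) {
        t <- s + xi
        if (abs(s) >= abs(xi)) {
          comp <- comp + ((s - t) + xi)  # low-order bits of xi were lost
        } else {
          comp <- comp + ((xi - t) + s)  # low-order bits of s were lost
        }
        s <- t
      }
      s + comp                           # fold the compensation back in
    }
    neumaier_sum(c(2^63, 1, -2^63))      # 1, matching sum()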

> I would suggest "What every computer scientist should know about
> floating-point arithmetic" ACM Computing Surveys 23:1 (March 1991) for the
> basics.  Anything by Kahan (http://www.cs.berkeley.edu/~wkahan/) is
> interesting.  Beyond elementary floating-point arithmetic, there is of
> course the vast field of numerical analysis, which underlies many of the
> algorithms used by R and other statistical systems.

Thank you very much for the pointers!

Best regards,
David.
