Re: [R] MASS::mvrnorm() on MKL may produce different numbers even when the seed is the same?

2023-08-18 Thread Martin Maechler
> Bill Dunlap 
> on Thu, 17 Aug 2023 07:31:12 -0700 writes:

> MKL's results can depend on the number of threads running and perhaps other
> things.  They blame it on the non-associativity of floating point
> arithmetic.  This article gives a way to make results repeatable:

> https://www.intel.com/content/www/us/en/developer/articles/technical/introduction-to-the-conditional-numerical-reproducibility-cnr.html

> -Bill

Thanks a lot, Bill, for your answer and this pointer to the MKL creators'
explanation and details on how to achieve reproducibility!

Conditional reproducibility comes at some cost, speed-wise:

 If limited to a particular code-path, Intel MKL performance can in some 
circumstances degrade by more than half. 
This can be easily understood by noting that matrix-multiply performance nearly 
doubled with the introduction of new processors supporting AVX instructions. In 
other cases, performance may degrade by 10-20% if algorithms are restricted to 
maintain the order of operations.


It confirms what I have been claiming for about a dozen years:
that in my experience, *all* optimizing BLAS/Lapack libraries trade
accuracy for speed to some extent, and hence an unoptimized
BLAS/Lapack (such as the one in R's sources) should be considered the
gold standard.
{Consequently, I have not been fully happy that we switched to
 using an *external* Lapack when one is found during configuration.
 Of course there *are* other good reasons to do such a switch,
 notably compatibility with other math software running on the same system.}

Now, in the above report, they show how to ensure
  CNR := conditional numerical reproducibility
when using MKL on Intel-compatible chips ("conditional" meaning, e.g.,
the same level of parallelization).

I think it would be nice to provide the average R user with a
(possibly super small) R package that allows one to turn such CNR
reproducibility on (and off).
Strict reproducibility when running on the same hardware and
software should be a very high desideratum for all scientists,
and I hope also for all data "wranglers" etc.
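
A minimal sketch of what such a toggle could look like (the function
names are hypothetical; MKL_CBWR is Intel's documented CNR control, and
it is only read when MKL initializes, so a real package would have to
call the C-level mkl_cbwr_set() to switch at run time):

## Sketch only -- not an existing package:
cnr_on <- function(mode = c("COMPATIBLE", "AUTO")) {
    mode <- match.arg(mode)
    Sys.setenv(MKL_CBWR = mode)  # effective only before MKL initializes
}
cnr_off <- function() Sys.unsetenv("MKL_CBWR")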

Martin

--
Martin Maechler
ETH Zurich  and  R Core team

> On Wed, Aug 16, 2023 at 8:11 PM Shu Fai Cheung 
> wrote:

>> Hi All,
>> 
>> When addressing an error in one of my packages in the CRAN check at CRAN,
>> I found something strange which, I believe, is unrelated to my package.
>> I am not sure whether it is a bug or a known issue. Therefore, I would like
>> to have advice from experts here.
>> 
>> The error at CRAN check occurred in a test, and only on R-devel with MKL.
>> No issues on all other platforms. The test compares two sets of random
>> numbers, supposed to be identical because generated from the same seed.
>> However, when I tried that in R-devel (2023-08-15 r84957) on Ubuntu
>> 22.04.3 LTS, linked to MKL (2023.2.0), I found that this is not the case
>> in one situation.
>> 
>> I don't know why but somehow only one particular set of means and
>> covariance matrix, among many others in the code, led to that problem.
>> Please find below the code to reproduce the means and the covariance
>> matrix (pardon me for the long code):
>> 
>> mu0 <- structure(c(0.52252416853188655, 0.39883382931927774,
>> 1.6296642535174521,
>> 2.9045763671121816, -0.19816874840500939, 0.42610841566522556,
>> 0.30155498531316366, 1.0339601619394503, 3.4125587827873192,
>> 13.125481598475405, 19.275480386183503, 658.94225353462195,
>> 1.0997193726987393,
>> 9.9980286642877214, 6.4947188998602092, -12.952617780813679,
>> 8.4991882784024799, 106.82209520278377, 0.19623712449219838), class =
>> c("numeric"))
>> 
>> sigma0 <- structure(c(0.0047010182215945009, 1.4688424622163808e-17,
>> -2.5269697696885822e-17,
>> -1.2451516929358655e-18, 3.6553475725253408e-19, 3.2092363914356227e-19,
>> -2.6341938382508503e-18, 8.6439582878287556e-19, -1.295240602054123e-17,
>> 1.1154057497258702e-18, -8.2407621704807367e-33, -1.0891686096304908e-15,
>> -2.6800671450262819e-18, -0.0009225142979911, -1.4833018148344697e-16,
>> 2.292242132203e-16, -3.5128615476876035e-18, 2.8004770623734002e-33,
>> 2.0818099301348832e-34, 1.294907062174661e-17, 0.012788608283099183,
>> 1.5603789410838518e-15, 1.0425182851251724e-16, 2.758318745465066e-17,
>> -9.7047963858148572e-18, -7.4685438142667792e-17, -1.9426723638193857e-17,
>> -7.8775307262071738e-17, -4.0773523140359949e-17, 4.1212975037936416e-18,
>> -1.199934828824835e-14, 2.301740948367742e-17, -2.5410883836579325e-18,
>> -0.129172198695584, -1.9922957470809647e-14, 1.7077690739402761e-16,
>> 1.663702957392817e-30, 5.5362608501514495e-32, -2.5575000656645995e-17,
>> 7.3879960338026333e-16, 0.052246980157830372, -0.007670030643844231,
>> -1.5468402204507243e-17, 1.0518757786432683e-17, 2.9992131812421

Re: [R] X11 font

2023-08-18 Thread Ivan Krylov
On Wed, 16 Aug 2023 16:08:57 -0500,
"Therneau, Terry M., Ph.D. via R-help" writes:

> I get the following error out of R, on a newer Ubuntu installation.

> ! X11 font -adobe-helvetica-%s-%s-*-*-%d-*-*-*-*-*-*-*, face 1 at
> size 12 could not be loaded

> Version:
> R Under development (unstable) (2023-08-01 r84818) -- "Unsuffered
> Consequences"
> Copyright (C) 2023 The R Foundation for Statistical Computing
> Platform: aarch64-unknown-linux-gnu

What's the output of capabilities()? It looks like you're getting
png(type = 'Xlib') instead of the nicer looking png(type = 'cairo').
When you compiled R, did you have cairo headers installed? Try running
sudo apt build-dep r-base, then running .../configure again, then
compiling R.
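
For reference, a quick check along these lines, run in the R build in
question (the file name is just an example):

## which optional subsystems were compiled in?
capabilities(c("X11", "cairo"))
## if "cairo" is TRUE, an explicit request avoids the Xlib fonts:
png("test.png", type = "cairo")
plot(1:10)
dev.off()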

-- 
Best regards,
Ivan



Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Leonard Mada via R-help

Dear Martin,

Thank you very much for your analysis.

I add only a small comment:
- the purpose of the modified formula was to apply l'Hospital;
- there are other ways to transform the formula; although applying 
l'Hospital once is probably more robust than simple transformations (but 
the computations are also more tedious and error prone);


After more careful thinking, I believe that it is a limitation due to 
floating points:

x = 1E-4
1/(-x^2/2 + x^4/24) + 2/x^2
# -0.1666667, i.e. the correct limit -1/6

y = 1 - x^2/2 + x^4/24;
1/(cos(x) - 1) + 2/x^2
1/(y - 1) + 2/x^2
# -1.215494
# correct: -1/6

We need the 3rd term for the correct computation of cos(x) in this
problem: but this is x^4/24, which for x = 1E-4 requires precision at
least up to 1E-16/24, or ~ 1E-18. I did not initially think about
that. The trigonometric series skip every other term, and are therefore
much uglier than the log. The problem really stems from the
representation of 1 - x^2/2, as shown below:

x = 1E-4
print(1 - x^2/2, digits=20)
print(0.999999995, digits=20) # fails
# 0.99999999500000003039
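
Incidentally, the size of that rounded-away bit accounts for the
mysterious 1.215494 above: to first order,
1/(x^2/2 - delta) - 2/x^2 ~= 4*delta/x^4. A small sketch, using Rmpfr
to expose the difference delta that plain doubles cannot see:

library(Rmpfr)
x     <- 1e-4
delta <- asNumeric(mpfr(1 - x^2/2, 128) - mpfr("0.999999995", 128))
delta            # ~ 3.04e-17, the representation error shown above
4 * delta / x^4  # ~ 1.215, the magnitude of the observed error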

Maybe some functions of type cos1p and cos1n would be handy for such 
computations (to replace the manual series expansion):

cos1p(x) = 1 + cos(x)
cos1n(x) = 1 - cos(x)
Though, I do not yet have the big picture.
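
For what it is worth, a stable cos1n would not even need a series
expansion: the half-angle identity 1 - cos(x) = 2*sin(x/2)^2 is exact,
and sin() loses no digits near 0. A sketch (these helpers are only
proposed here; they are not existing R functions):

cos1n <- function(x) 2 * sin(x / 2)^2   # = 1 - cos(x), no cancellation
cos1p <- function(x) 2 * cos(x / 2)^2   # = 1 + cos(x)

x <- 1e-4
1 / cos1n(x) - 2 / x^2      # ~ 0.1666667, the correct limit 1/6
1 / (1 - cos(x)) - 2 / x^2  # naive version: ~ 1.215, visibly off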


Sincerely,


Leonard


On 8/17/2023 1:57 PM, Martin Maechler wrote:

Leonard Mada
 on Wed, 16 Aug 2023 20:50:52 +0300 writes:

 > Dear Iris,
 > Dear Martin,

 > Thank you very much for your replies. I add a few comments.

 > 1.) Correct formula
 > The formula in the Subject Title was correct. A small glitch swept into
 > the last formula:
 > - 1/(cos(x) - 1) - 2/x^2
 > or
 > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;

 > 2.) log1p
 > Actually, the log-part behaves much better. And when it fails, it fails
 > completely (which is easy to spot!).

 > x = 1E-6
 > log(x) -log(1 - cos(x))/2
 > # 0.3465291

 > x = 1E-8
 > log(x) -log(1 - cos(x))/2
 > # Inf
 > log(x) - log1p(- cos(x))/2
 > # Inf => fails as well!
 > # although using only log1p(cos(x)) seems to do the trick;
 > log1p(cos(x)); log(2)/2;

 > 3.) 1/(1 - cos(x)) - 2/x^2
 > It is possible to convert the formula to one which is numerically more
 > stable. It is also possible to compute it manually, but it involves much
 > more work and is also error prone:

 > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
 > And applying L'Hospital:
 > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
 > # and a 2nd & 3rd & 4th time
 > 1/6

 > The big problem was that I did not expect it to fail for x = 1E-4. I
 > thought it is more robust and works maybe until 1E-5.
 > x = 1E-5
 > 2/x^2 - 2E+10
 > # -3.814697e-06

 > This is the reason why I believe that there is room for improvement.

 > Sincerely,
 > Leonard

Thank you, Leonard.
Yes, I agree that it is amazing how much your formula suffers from
(a generalization of) "cancellation" --- leading you to think
there was a problem with cos() or log() or .. in R.
But really R uses the system builtin libmath library, and the
problem is really the inherent instability of your formula.

Indeed your first approximation was not really much more stable:

## 3.) 1/(1 - cos(x)) - 2/x^2
## It is possible to convert the formula to one which is numerically more
## stable. It is also possible to compute it manually, but it involves much
## more work and is also error prone:
## (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
## MM: but actually, that approximation does not seem better (close to the
## breakdown region):
f1 <- \(x) 1/(1 - cos(x)) - 2/x^2
f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
curve(f1, 1e-8, 1e-1, log="xy", n=2^10)
curve(f2, add = TRUE, col=2,   n=2^10)
## Zoom in:
curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
curve(f2, add = TRUE, col=2,   n=2^9)
## Zoom in much more in y-direction:
yl <- 1/6 + c(-5, 20)/10
curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9)
abline(h = 1/6, lty=3, col="gray")
curve(f2, add = TRUE, n=2^9, col=adjustcolor(2, 1/2))

Now, you can use the Rmpfr package (interface to the GNU MPFR
multiple-precision C library) to find out more :

if(!requireNamespace("Rmpfr")) install.packages("Rmpfr")
M <- function(x, precBits=128) Rmpfr::mpfr(x, precBits)

(xM <- M(1e-8))  # yes, only ~ 16 dig accurate
## 1.000000000000000020922560830128472675327e-8
M(10, 128)^-8 # would of course be more accurate,
## but we want the calculation for the double precision number 1e-8

## Now you can draw "the truth" into the above plots:
curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
curve(f2, add = TRUE, col=2,   n=2^9)
## correct:
curve(f1(M(x, 256)), add = TRUE, col=4, lwd=2, n=2^9)
abline(h = 1/6, lty=3, col="gray")

But indeed, we take note how much of this is the formula's instability:
Also MPFR needs a lot of extra bits precision before it gets to
the correct numbers:

xM <- c(M(1e-8,  80), M(1e-8,  96), M(1e-8, 112),
 M

Re: [R] MASS::mvrnorm() on MKL may produce different numbers even when the seed is the same?

2023-08-18 Thread Rolf Turner


On Fri, 18 Aug 2023 12:17:51 +0200
Martin Maechler  wrote:



> I think it would be nice to provide the average R user with a
> (possibly super small) R package that allows to turn on (and off)
> such CNR reproducibility.



Would it be possible to effect this on/off via options()?
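
Something like the following sketch, perhaps; the option name is
hypothetical, and MKL only reads its documented MKL_CBWR setting when
the library initializes, so this would have to happen early:

options(mkl.cnr = "COMPATIBLE")
## which a package could then honour, e.g. in its .onLoad():
if (!is.null(mode <- getOption("mkl.cnr"))) Sys.setenv(MKL_CBWR = mode)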

cheers,

Rolf

-- 
Honorary Research Fellow
Department of Statistics
University of Auckland
Stats. Dep't. (secretaries) phone: +64-9-373-7599 ext. 89622
Home phone: +64-9-480-4619



Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Leonard Mada via R-help

I have added some clarifications below.

On 8/18/2023 10:20 PM, Leonard Mada wrote:

[...]
After more careful thinking, I believe that it is a limitation due to 
floating points:

[...]

The problem really stems from the representation of 1 - x^2/2 as shown 
below:

x = 1E-4
print(1 - x^2/2, digits=20)
print(0.999999995, digits=20) # fails
# 0.99999999500000003039


The floating point representation of 1 - x^2/2 is the real culprit:
# 0.99999999500000003039

The 3039 at the end is really an error due to the floating point 
representation. However, this error blows up when inverting the value:

x = 1E-4;
y = 1 - x^2/2;
1/(1 - y) - 2/x^2
# 1.215494
# should be 1/(x^2/2) - 2/x^2 = 0


The ugly thing is that the error only gets worse as x decreases. The 
value neither drops to 0, nor does it blow up to infinity; but it gets 
worse in a continuous manner. At least the reason has now become clear.





Maybe some functions of type cos1p and cos1n would be handy for such 
computations (to replace the manual series expansion):

cos1p(x) = 1 + cos(x)
cos1n(x) = 1 - cos(x)
Though, I do not yet have the big picture.



Sincerely,


Leonard




On 8/17/2023 1:57 PM, Martin Maechler wrote:

Leonard Mada
 on Wed, 16 Aug 2023 20:50:52 +0300 writes:

 > Dear Iris,
 > Dear Martin,

 > Thank you very much for your replies. I add a few comments.

 > 1.) Correct formula
 > The formula in the Subject Title was correct. A small glitch swept into
 > the last formula:
 > - 1/(cos(x) - 1) - 2/x^2
 > or
 > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;

 > 2.) log1p
 > Actually, the log-part behaves much better. And when it fails, it fails
 > completely (which is easy to spot!).

 > x = 1E-6
 > log(x) -log(1 - cos(x))/2
 > # 0.3465291

 > x = 1E-8
 > log(x) -log(1 - cos(x))/2
 > # Inf
 > log(x) - log1p(- cos(x))/2
 > # Inf => fails as well!
 > # although using only log1p(cos(x)) seems to do the trick;
 > log1p(cos(x)); log(2)/2;

 > 3.) 1/(1 - cos(x)) - 2/x^2
 > It is possible to convert the formula to one which is numerically more
 > stable. It is also possible to compute it manually, but it involves much
 > more work and is also error prone:

 > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
 > And applying L'Hospital:
 > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
 > # and a 2nd & 3rd & 4th time
 > 1/6

 > The big problem was that I did not expect it to fail for x = 1E-4. I
 > thought it is more robust and works maybe until 1E-5.
 > x = 1E-5
 > 2/x^2 - 2E+10
 > # -3.814697e-06

 > This is the reason why I believe that there is room for improvement.


 > Sincerely,
 > Leonard

Thank you, Leonard.
Yes, I agree that it is amazing how much your formula suffers from
(a generalization of) "cancellation" --- leading you to think
there was a problem with cos() or log() or .. in R.
But really R uses the system builtin libmath library, and the
problem is really the inherent instability of your formula.

Indeed your first approximation was not really much more stable:

## 3.) 1/(1 - cos(x)) - 2/x^2
## It is possible to convert the formula to one which is numerically more
## stable. It is also possible to compute it manually, but it involves much
## more work and is also error prone:
## (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
## MM: but actually, that approximation does not seem better (close to the
## breakdown region):

f1 <- \(x) 1/(1 - cos(x)) - 2/x^2
f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
curve(f1, 1e-8, 1e-1, log="xy", n=2^10)
curve(f2, add = TRUE, col=2,   n=2^10)
## Zoom in:
curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
curve(f2, add = TRUE, col=2,   n=2^9)
## Zoom in much more in y-direction:
yl <- 1/6 + c(-5, 20)/10
curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9)
abline(h = 1/6, lty=3, col="gray")
curve(f2, add = TRUE, n=2^9, col=adjustcolor(2, 1/2))

Now, you can use the Rmpfr package (interface to the GNU MPFR
multiple-precision C library) to find out more :

if(!requireNamespace("Rmpfr")) install.packages("Rmpfr")
M <- function(x, precBits=128) Rmpfr::mpfr(x, precBits)

(xM <- M(1e-8))  # yes, only ~ 16 dig accurate
## 1.000000000000000020922560830128472675327e-8
M(10, 128)^-8 # would of course be more accurate,
## but we want the calculation for the double precision number 1e-8

## Now you can draw "the truth" into the above plots:
curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
curve(f2, add = TRUE, col=2,   n=2^9)
## correct:
curve(f1(M(x, 256)), add = TRUE, col=4, lwd=2, n=2^9)
abline(h = 1/6, lty=3, col="gray")

But indeed, we take note how much of this is the formula's instability:
Also MPFR needs a lot of extra bits precision before it gets to
the correct numbers:

xM <- c(M(1e-8,  80), M(1e-8,  96), M(1e-8, 112),
 M(1e-8, 128), M(1e-8, 180), M(1e-8, 256))
## and round back to 70 bits for display:
R <- \(x) Rmpfr::roundMpfr(x, 70)
R(f1(

Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Bert Gunter
"The ugly thing is that the error only gets worse as x decreases. The
value neither drops to 0, nor does it blow up to infinity; but it gets
worse in a continuous manner."

If I understand you correctly, this is wrong:

> x <- 2^(-20) ## considerably less than 1e-4 !!
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
[1] 0

It's all about the accuracy of the binary approximation of floating point
numbers (and their arithmetic).

Cheers,
Bert


On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help <
r-help@r-project.org> wrote:

> I have added some clarifications below.
>
> On 8/18/2023 10:20 PM, Leonard Mada wrote:
> > [...]
> > After more careful thinking, I believe that it is a limitation due to
> > floating points:
> > [...]
> >
> > The problem really stems from the representation of 1 - x^2/2 as shown
> > below:
> > x = 1E-4
> > print(1 - x^2/2, digits=20)
> > print(0.999999995, digits=20) # fails
> > # 0.99999999500000003039
>
> The floating point representation of 1 - x^2/2 is the real culprit:
> # 0.99999999500000003039
>
> The 3039 at the end is really an error due to the floating point
> representation. However, this error blows up when inverting the value:
> x = 1E-4;
> y = 1 - x^2/2;
> 1/(1 - y) - 2/x^2
> # 1.215494
> # should be 1/(x^2/2) - 2/x^2 = 0
>
>
> The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner. At least the reason has become now clear.
>
>
> >
> > Maybe some functions of type cos1p and cos1n would be handy for such
> > computations (to replace the manual series expansion):
> > cos1p(x) = 1 + cos(x)
> > cos1n(x) = 1 - cos(x)
> > Though, I do not have yet the big picture.
> >
>
> Sincerely,
>
>
> Leonard
>
> >
> >
> > On 8/17/2023 1:57 PM, Martin Maechler wrote:
> >>> Leonard Mada
> >>>  on Wed, 16 Aug 2023 20:50:52 +0300 writes:
> >>  > Dear Iris,
> >>  > Dear Martin,
> >>
> >>  > Thank you very much for your replies. I add a few comments.
> >>
> >>  > 1.) Correct formula
> >>  > The formula in the Subject Title was correct. A small glitch
> >> swept into
> >>  > the last formula:
> >>  > - 1/(cos(x) - 1) - 2/x^2
> >>  > or
> >>  > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
> >>
> >>  > 2.) log1p
> >>  > Actually, the log-part behaves much better. And when it fails,
> >> it fails
> >>  > completely (which is easy to spot!).
> >>
> >>  > x = 1E-6
> >>  > log(x) -log(1 - cos(x))/2
> >>  > # 0.3465291
> >>
> >>  > x = 1E-8
> >>  > log(x) -log(1 - cos(x))/2
> >>  > # Inf
> >>  > log(x) - log1p(- cos(x))/2
> >>  > # Inf => fails as well!
> >>  > # although using only log1p(cos(x)) seems to do the trick;
> >>  > log1p(cos(x)); log(2)/2;
> >>
> >>  > 3.) 1/(1 - cos(x)) - 2/x^2
> >>  > It is possible to convert the formula to one which is
> >> numerically more
> >>  > stable. It is also possible to compute it manually, but it
> >> involves much
> >>  > more work and is also error prone:
> >>
> >>  > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> >>  > And applying L'Hospital:
> >>  > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
> >>  > # and a 2nd & 3rd & 4th time
> >>  > 1/6
> >>
> >>  > The big problem was that I did not expect it to fail for x =
> >> 1E-4. I
> >>  > thought it is more robust and works maybe until 1E-5.
> >>  > x = 1E-5
> >>  > 2/x^2 - 2E+10
> >>  > # -3.814697e-06
> >>
> >>  > This is the reason why I believe that there is room for
> >> improvement.
> >>
> >>  > Sincerely,
> >>  > Leonard
> >>
> >> Thank you, Leonard.
> >> Yes, I agree that it is amazing how much your formula suffers from
> >> (a generalization of) "cancellation" --- leading you to think
> >> there was a problem with cos() or log() or .. in R.
> >> But really R uses the system builtin libmath library, and the
> >> problem is really the inherent instability of your formula.
> >>
> >> Indeed your first approximation was not really much more stable:
> >>
> >> ## 3.) 1/(1 - cos(x)) - 2/x^2
> >> ## It is possible to convert the formula to one which is numerically
> >> more
> >> ## stable. It is also possible to compute it manually, but it
> >> involves much
> >> ## more work and is also error prone:
> >> ## (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> >> ## MM: but actually, that approximation does not seem better (close
> >> to the breakdown region):
> >> f1 <- \(x) 1/(1 - cos(x)) - 2/x^2
> >> f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> >> curve(f1, 1e-8, 1e-1, log="xy", n=2^10)
> >> curve(f2, add = TRUE, col=2,   n=2^10)
> >> ## Zoom in:
> >> curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
> >> curve(f2, add = TRUE, col=2,   n=2^9)
> >> ## Zoom in much more in y-direction:
> >> yl <- 1/6 + c(-5, 20)/10
> >> curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9)
> >> abline(h = 1/6, lty=3, col="gray")
> >> curve(f2, add =

Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Leonard Mada via R-help
Dear Bert,


Values of type 2^(-n) (and its binary complement) are exactly 
represented as floating point numbers and do not generate the error. 
However, values away from such special x-values will generate errors:


# exactly represented:
x = 9.53674316406250e-07
y <- 1 - x^2/2;
1/(1 - y) - 2/x^2

# almost exact:
x = 9.536743164062502e-07
y <- 1 - x^2/2;
1/(1 - y) - 2/x^2

x = 9.536743164062498e-07
y <- 1 - x^2/2;
1/(1 - y) - 2/x^2

# the result behaves far better around values
# which can be represented exactly,
# but fails drastically for other values!
x = 2^(-20) * 1.1
y <- 1 - x^2/2;
1/(1 - y) - 2/x^2
# 58672303 instead of 0!


Sincerely,


Leonard


On 8/19/2023 2:06 AM, Bert Gunter wrote:
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> If I understand you correctly, this is wrong:
>
> > x <- 2^(-20) ## considerably less than 1e-4 !!
> > y <- 1 - x^2/2;
> > 1/(1 - y) - 2/x^2
> [1] 0
>
> It's all about the accuracy of the binary approximation of floating 
> point numbers (and their arithmetic)
>
> Cheers,
> Bert
>
>
> On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help 
>  wrote:
>
> I have added some clarifications below.
>
> On 8/18/2023 10:20 PM, Leonard Mada wrote:
> > [...]
> > After more careful thinking, I believe that it is a limitation
> due to
> > floating points:
> > [...]
> >
> > The problem really stems from the representation of 1 - x^2/2 as
> shown
> > below:
> > x = 1E-4
> > print(1 - x^2/2, digits=20)
> > print(0.999999995, digits=20) # fails
> > # 0.99999999500000003039
>
> The floating point representation of 1 - x^2/2 is the real culprit:
> # 0.99999999500000003039
>
> The 3039 at the end is really an error due to the floating point
> representation. However, this error blows up when inverting the value:
> x = 1E-4;
> y = 1 - x^2/2;
> 1/(1 - y) - 2/x^2
> # 1.215494
> # should be 1/(x^2/2) - 2/x^2 = 0
>
>
> The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it
> gets
> worse in a continuous manner. At least the reason has become now
> clear.
>
>
> >
> > Maybe some functions of type cos1p and cos1n would be handy for
> such
> > computations (to replace the manual series expansion):
> > cos1p(x) = 1 + cos(x)
> > cos1n(x) = 1 - cos(x)
> > Though, I do not have yet the big picture.
> >
>
> Sincerely,
>
>
> Leonard
>
> >
> >
> > On 8/17/2023 1:57 PM, Martin Maechler wrote:
> >>> Leonard Mada
> >>>  on Wed, 16 Aug 2023 20:50:52 +0300 writes:
> >>  > Dear Iris,
> >>  > Dear Martin,
> >>
> >>  > Thank you very much for your replies. I add a few comments.
> >>
> >>  > 1.) Correct formula
> >>  > The formula in the Subject Title was correct. A small
> glitch
> >> swept into
> >>  > the last formula:
> >>  > - 1/(cos(x) - 1) - 2/x^2
> >>  > or
> >>  > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
> >>
> >>  > 2.) log1p
> >>  > Actually, the log-part behaves much better. And when it
> fails,
> >> it fails
> >>  > completely (which is easy to spot!).
> >>
> >>  > x = 1E-6
> >>  > log(x) -log(1 - cos(x))/2
> >>  > # 0.3465291
> >>
> >>  > x = 1E-8
> >>  > log(x) -log(1 - cos(x))/2
> >>  > # Inf
> >>  > log(x) - log1p(- cos(x))/2
> >>  > # Inf => fails as well!
> >>  > # although using only log1p(cos(x)) seems to do the trick;
> >>  > log1p(cos(x)); log(2)/2;
> >>
> >>  > 3.) 1/(1 - cos(x)) - 2/x^2
> >>  > It is possible to convert the formula to one which is
> >> numerically more
> >>  > stable. It is also possible to compute it manually, but it
> >> involves much
> >>  > more work and is also error prone:
> >>
> >>  > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> >>  > And applying L'Hospital:
> >>  > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
> >>  > # and a 2nd & 3rd & 4th time
> >>  > 1/6
> >>
> >>  > The big problem was that I did not expect it to fail for
> x =
> >> 1E-4. I
> >>  > thought it is more robust and works maybe until 1E-5.
> >>  > x = 1E-5
> >>  > 2/x^2 - 2E+10
> >>  > # -3.814697e-06
> >>
> >>  > This is the reason why I believe that there is room for
> >> improvement.
> >>
> >>  > Sincerely,
> >>  > Leonard
> >>
> >> Thank you, Leonard.
> >> Yes, I agree that it is amazing how much your formula suffers from
> >> (a generalization of) "cancellation" --- leading you 

Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Bert Gunter
"Values of type 2^(-n) (and its binary complement) are exactly represented
as floating point numbers and do not generate the error. However, values
away from such special x-values will generate errors:"

That was exactly my point: The size of errors depends on the accuracy of
binary representation of floating point numbers and their arithmetic.

But you previously said:
"The ugly thing is that the error only gets worse as x decreases. The
value neither drops to 0, nor does it blow up to infinity; but it gets
worse in a continuous manner."

That is wrong and disagrees with what you say above.

-- Bert

On Fri, Aug 18, 2023 at 4:34 PM Leonard Mada  wrote:

> Dear Bert,
>
>
> Values of type 2^(-n) (and its binary complement) are exactly represented
> as floating point numbers and do not generate the error. However, values
> away from such special x-values will generate errors:
>
>
> # exactly represented:
> x = 9.53674316406250e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> # almost exact:
> x = 9.536743164062502e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> x = 9.536743164062498e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> # the result behaves far better around values
> # which can be represented exactly,
> # but fails drastically for other values!
> x = 2^(-20) * 1.1
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
> # 58672303 instead of 0!
>
>
> Sincerely,
>
>
> Leonard
>
>
> On 8/19/2023 2:06 AM, Bert Gunter wrote:
>
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> If I understand you correctly, this is wrong:
>
> > x <- 2^(-20) ## considerably less than 1e-4 !!
> > y <- 1 - x^2/2;
> > 1/(1 - y) - 2/x^2
> [1] 0
>
> It's all about the accuracy of the binary approximation of floating point
> numbers (and their arithmetic)
>
> Cheers,
> Bert
>
>
> On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help <
> r-help@r-project.org> wrote:
>
>> I have added some clarifications below.
>>
>> On 8/18/2023 10:20 PM, Leonard Mada wrote:
>> > [...]
>> > After more careful thinking, I believe that it is a limitation due to
>> > floating points:
>> > [...]
>> >
>> > The problem really stems from the representation of 1 - x^2/2 as shown
>> > below:
>> > x = 1E-4
>> > print(1 - x^2/2, digits=20)
>> > print(0.999999995, digits=20) # fails
>> > # 0.99999999500000003039
>>
>> The floating point representation of 1 - x^2/2 is the real culprit:
>> # 0.99999999500000003039
>>
>> The 3039 at the end is really an error due to the floating point
>> representation. However, this error blows up when inverting the value:
>> x = 1E-4;
>> y = 1 - x^2/2;
>> 1/(1 - y) - 2/x^2
>> # 1.215494
>> # should be 1/(x^2/2) - 2/x^2 = 0
>>
>>
>> The ugly thing is that the error only gets worse as x decreases. The
>> value neither drops to 0, nor does it blow up to infinity; but it gets
>> worse in a continuous manner. At least the reason has become now clear.
>>
>>
>> >
>> > Maybe some functions of type cos1p and cos1n would be handy for such
>> > computations (to replace the manual series expansion):
>> > cos1p(x) = 1 + cos(x)
>> > cos1n(x) = 1 - cos(x)
>> > Though, I do not have yet the big picture.
>> >
>>
>> Sincerely,
>>
>>
>> Leonard
>>
>> >
>> >
>> > On 8/17/2023 1:57 PM, Martin Maechler wrote:
>> >>> Leonard Mada
>> >>>  on Wed, 16 Aug 2023 20:50:52 +0300 writes:
>> >>  > Dear Iris,
>> >>  > Dear Martin,
>> >>
>> >>  > Thank you very much for your replies. I add a few comments.
>> >>
>> >>  > 1.) Correct formula
>> >>  > The formula in the Subject Title was correct. A small glitch
>> >> swept into
>> >>  > the last formula:
>> >>  > - 1/(cos(x) - 1) - 2/x^2
>> >>  > or
>> >>  > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
>> >>
>> >>  > 2.) log1p
>> >>  > Actually, the log-part behaves much better. And when it fails,
>> >> it fails
>> >>  > completely (which is easy to spot!).
>> >>
>> >>  > x = 1E-6
>> >>  > log(x) -log(1 - cos(x))/2
>> >>  > # 0.3465291
>> >>
>> >>  > x = 1E-8
>> >>  > log(x) -log(1 - cos(x))/2
>> >>  > # Inf
>> >>  > log(x) - log1p(- cos(x))/2
>> >>  > # Inf => fails as well!
>> >>  > # although using only log1p(cos(x)) seems to do the trick;
>> >>  > log1p(cos(x)); log(2)/2;
>> >>
>> >>  > 3.) 1/(1 - cos(x)) - 2/x^2
>> >>  > It is possible to convert the formula to one which is
>> >> numerically more
>> >>  > stable. It is also possible to compute it manually, but it
>> >> involves much
>> >>  > more work and is also error prone:
>> >>
>> >>  > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
>> >>  > And applying L'Hospital:
>> >>  > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
>> >>  > # and a 2nd & 3rd & 4th time
>> >>  > 1/6
>> >>
>> >>  > The big problem was that I did not expect it to fail for x =
>> >> 1E-4. I
>> >>  > thought it is more robust

Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Leonard Mada via R-help
Dear Bert,


On 8/19/2023 2:47 AM, Bert Gunter wrote:
> "Values of type 2^(-n) (and its binary complement) are exactly 
> represented as floating point numbers and do not generate the error. 
> However, values away from such special x-values will generate errors:"
>
> That was exactly my point: The size of errors depends on the accuracy 
> of binary representation of floating point numbers and their arithmetic.
>
> But you previously said:
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> That is wrong and disagrees with what you say above.
>
> -- Bert


On "average", the error increases. But it does NOT increase monotonically:

x = 2^(-20) * 1.1 # is still relatively close to the exact value!
y <- 1 - x^2/2;
1/(1 - y) - 2/x^2
# 58672303, not 0, nor close to 0;
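
A rough check of this claim, using the same construction over a range of
decreasing x (where each error lands depends on how each x^2/2 falls
relative to the representable doubles, hence the erratic jumps):

x   <- 10^-seq(3, 7, by = 0.5)
y   <- 1 - x^2/2
err <- abs(1/(1 - y) - 2/x^2)
signif(cbind(x, err), 3)  # err grows roughly like x^-4 on average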


Sincerely,


Leonard


>
> On Fri, Aug 18, 2023 at 4:34 PM Leonard Mada  wrote:
>
> Dear Bert,
>
>
> Values of type 2^(-n) (and its binary complement) are exactly
> represented as floating point numbers and do not generate the
> error. However, values away from such special x-values will
> generate errors:
>
>
> # exactly represented:
> x = 9.53674316406250e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> # almost exact:
> x = 9.536743164062502e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> x = 9.536743164062498e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> # the result behaves far better around values
> # which can be represented exactly,
> # but fails drastically for other values!
> x = 2^(-20) * 1.1
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
> # 58672303 instead of 0!
>
>
> Sincerely,
>
>
> Leonard
>
>
> On 8/19/2023 2:06 AM, Bert Gunter wrote:
>> "The ugly thing is that the error only gets worse as x decreases.
>> The
>> value neither drops to 0, nor does it blow up to infinity; but it
>> gets
>> worse in a continuous manner."
>>
>> If I understand you correctly, this is wrong:
>>
>> > x <- 2^(-20) ## considerably less than 1e-4 !!
>> > y <- 1 - x^2/2;
>> > 1/(1 - y) - 2/x^2
>> [1] 0
>>
>> It's all about the accuracy of the binary approximation of
>> floating point numbers (and their arithmetic)
>>
>> Cheers,
>> Bert
>>
>>
>> On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help
>>  wrote:
>>
>> I have added some clarifications below.
>>
>> On 8/18/2023 10:20 PM, Leonard Mada wrote:
>> > [...]
>> > After more careful thinking, I believe that it is a
>> limitation due to
>> > floating points:
>> > [...]
>> >
>> > The problem really stems from the representation of 1 -
>> x^2/2 as shown
>> > below:
>> > x = 1E-4
>> > print(1 - x^2/2, digits=20)
>> > print(0.999999995, digits=20) # fails
>> > # 0.99999999500000003039
>>
>> The floating point representation of 1 - x^2/2 is the real
>> culprit:
>> # 0.99999999500000003039
>>
>> The 3039 at the end is really an error due to the floating point
>> representation. However, this error blows up when inverting
>> the value:
>> x = 1E-4;
>> y = 1 - x^2/2;
>> 1/(1 - y) - 2/x^2
>> # 1.215494
>> # should be 1/(x^2/2) - 2/x^2 = 0
>>
>>
>> The ugly thing is that the error only gets worse as x
>> decreases. The
>> value neither drops to 0, nor does it blow up to infinity;
>> but it gets
>> worse in a continuous manner. At least the reason has become
>> now clear.
>>
>>
>> >
>> > Maybe some functions of type cos1p and cos1n would be handy
>> for such
>> > computations (to replace the manual series expansion):
>> > cos1p(x) = 1 + cos(x)
>> > cos1n(x) = 1 - cos(x)
>> > Though, I do not have yet the big picture.
>> >
>>
>> Sincerely,
>>
>>
>> Leonard
>>
>> >
>> >
>> > On 8/17/2023 1:57 PM, Martin Maechler wrote:
>> >>> Leonard Mada
>> >>>  on Wed, 16 Aug 2023 20:50:52 +0300 writes:
>> >>  > Dear Iris,
>> >>  > Dear Martin,
>> >>
>> >>  > Thank you very much for your replies. I add a few
>> comments.
>> >>
>> >>  > 1.) Correct formula
>> >>  > The formula in the Subject Title was correct. A
>> small glitch
>> >> swept into
>> >>  > the last formula:
>> >>  > - 1/(cos(x) - 1) - 2/x^2
>> >>  > or
>> >>  > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
>> >>
>> >>  > 2.) log1p
>> >>  > Actually, the log-part behaves much better. And
>> when it fails,
>> >

Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread avi.e.gross
This discussion is sooo familiar.

If you want indefinite precision arithmetic, feel free to use a language and 
data type that supports it.

Otherwise, only do calculations that fit in a safe zone.

This is not just about this scenario. Floating point can work well when adding 
(or subtracting) two numbers of about the same size. But if one number is 
.123456789... and another is the same except raised to the -45th power of ten, 
then adding them effectively throws away the second number.
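
A two-line illustration of that absorption (the numbers here are just
examples):

big   <- 0.123456789
small <- 0.123456789e-45
(big + small) == big  # TRUE: doubles carry ~16 digits, so 'small' vanishes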

This is a well-known problem for any finite binary representation of numbers. 
In the example given, yes, the smaller the number is, the worse the behavior in 
this case tends to be.

There are many solutions and some are fairly expensive in terms of computation 
time and sometimes memory usage. 

Are there any good indefinite (or much higher) precision packages out there 
that would not only support the data type needed but also properly be used and 
passed along to the functions used to do complex calculations? No, I am not 
asking for indefinite precision complex numbers, but generally that would be a 
tuple of such numbers.


-Original Message-
From: R-help  On Behalf Of Bert Gunter
Sent: Friday, August 18, 2023 7:06 PM
To: Leonard Mada 
Cc: R-help Mailing List ; Martin Maechler 

Subject: Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

"The ugly thing is that the error only gets worse as x decreases. The
value neither drops to 0, nor does it blow up to infinity; but it gets
worse in a continuous manner."

If I understand you correctly, this is wrong:

> x <- 2^(-20) ## considerably less than 1e-4 !!
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
[1] 0

It's all about the accuracy of the binary approximation of floating point
numbers (and their arithmetic)

Cheers,
Bert


On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help <
r-help@r-project.org> wrote:

> I have added some clarifications below.
>
> On 8/18/2023 10:20 PM, Leonard Mada wrote:
> > [...]
> > After more careful thinking, I believe that it is a limitation due to
> > floating points:
> > [...]
> >
> > The problem really stems from the representation of 1 - x^2/2 as shown
> > below:
> > x = 1E-4
> > print(1 - x^2/2, digits=20)
> > print(0.999999995, digits=20) # fails
> > # 0.99999999500000003039
>
> The floating point representation of 1 - x^2/2 is the real culprit:
> # 0.99999999500000003039
>
> The 3039 at the end is really an error due to the floating point
> representation. However, this error blows up when inverting the value:
> x = 1E-4;
> y = 1 - x^2/2;
> 1/(1 - y) - 2/x^2
> # 1.215494
> # should be 1/(x^2/2) - 2/x^2 = 0
>
>
> The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner. At least the reason has become now clear.
>
>
> >
> > Maybe some functions of type cos1p and cos1n would be handy for such
> > computations (to replace the manual series expansion):
> > cos1p(x) = 1 + cos(x)
> > cos1n(x) = 1 - cos(x)
> > Though, I do not have yet the big picture.
> >
>
> Sincerely,
>
>
> Leonard
>
> >
> >
> > On 8/17/2023 1:57 PM, Martin Maechler wrote:
> >>> Leonard Mada
> >>>  on Wed, 16 Aug 2023 20:50:52 +0300 writes:
> >>  > Dear Iris,
> >>  > Dear Martin,
> >>
> >>  > Thank you very much for your replies. I add a few comments.
> >>
> >>  > 1.) Correct formula
> >>  > The formula in the Subject Title was correct. A small glitch
> >> swept into
> >>  > the last formula:
> >>  > - 1/(cos(x) - 1) - 2/x^2
> >>  > or
> >>  > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
> >>
> >>  > 2.) log1p
> >>  > Actually, the log-part behaves much better. And when it fails,
> >> it fails
> >>  > completely (which is easy to spot!).
> >>
> >>  > x = 1E-6
> >>  > log(x) -log(1 - cos(x))/2
> >>  > # 0.3465291
> >>
> >>  > x = 1E-8
> >>  > log(x) -log(1 - cos(x))/2
> >>  > # Inf
> >>  > log(x) - log1p(- cos(x))/2
> >>  > # Inf => fails as well!
> >>  > # although using only log1p(cos(x)) seems to do the trick;
> >>  > log1p(cos(x)); log(2)/2;
> >>
> >>  > 3.) 1/(1 - cos(x)) - 2/x^2
> >>  > It is possible to convert the formula to one which is
> >> numerically more
> >>  > stable. It is also possible to compute it manually, but it
> >> involves much
> >>  > more work and is also error prone:
> >>
> >>  > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> >>  > And applying L'Hospital:
> >>  > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
> >>  > # and a 2nd & 3rd & 4th time
> >>  > 1/6
> >>
> >>  > The big problem was that I did not expect it to fail for x =
> >> 1E-4. I
> >>  > thought it is more robust and works maybe until 1E-5.
> >>  > x = 1E-5
> >>  > 2/x^2 - 2E+10
> >>  > # -3.814697e-06
> >>
> >>  > This is the reason why I believe that there is room for
> >> improvement.
> >

Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

2023-08-18 Thread Bert Gunter
"Are there any good indefinite (or much higher) precision packages"

A simple search on "arbitrary precision arithmetic in R" would have
immediately gotten you to the Rmpfr package.

See also:
https://cran.r-project.org/web/packages/Ryacas/vignettes/arbitrary-precision.html
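
For instance, a minimal use of Rmpfr on the expression from this thread
(assuming the package is installed; cos() then dispatches to the
multiple-precision version):

library(Rmpfr)
f1 <- function(x) 1/(1 - cos(x)) - 2/x^2
f1(mpfr(1e-4, precBits = 256))  # ~ 0.16666667, i.e. the limit 1/6
f1(1e-4)                        # plain doubles: ~ 1.215, wrong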

-- Bert

On Fri, Aug 18, 2023 at 4:58 PM  wrote:

> This discussion is sooo familiar.
>
> If you want indefinite precision arithmetic, feel free to use a language
> and data type that supports it.
>
> Otherwise, only do calculations that fit in a safe zone.
>
> This is not just about this scenario. Floating point can work well when
> adding (or subtracting) two numbers of about the same size. But if one
> number is .123456789... and another is the same except raised to the -45th
> power of ten, then adding them effectively throws away the second number.
>
> This is a well-known problem for any finite binary representation of
> numbers. In the example given, yes, the smaller the number is, the worse
> the behavior in this case tends to be.
>
> There are many solutions and some are fairly expensive in terms of
> computation time and sometimes memory usage.
>
> Are there any good indefinite (or much higher) precision packages out
> there that would not only support the data type needed but also properly be
> used and passed along to the functions used to do complex calculations? No,
> I am not asking for indefinite precision complex numbers, but generally
> that would be a tuple of such numbers.
>
>
> -Original Message-
> From: R-help  On Behalf Of Bert Gunter
> Sent: Friday, August 18, 2023 7:06 PM
> To: Leonard Mada 
> Cc: R-help Mailing List ; Martin Maechler <
> maech...@stat.math.ethz.ch>
> Subject: Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
>
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> If I understand you correctly, this is wrong:
>
> > x <- 2^(-20) ## considerably less than 1e-4 !!
> > y <- 1 - x^2/2;
> > 1/(1 - y) - 2/x^2
> [1] 0
>
> It's all about the accuracy of the binary approximation of floating point
> numbers (and their arithmetic)
>
> Cheers,
> Bert
>
>
> On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help <
> r-help@r-project.org> wrote:
>
> > I have added some clarifications below.
> >
> > On 8/18/2023 10:20 PM, Leonard Mada wrote:
> > > [...]
> > > After more careful thinking, I believe that it is a limitation due to
> > > floating points:
> > > [...]
> > >
> > > The problem really stems from the representation of 1 - x^2/2 as shown
> > > below:
> > > x = 1E-4
> > > print(1 - x^2/2, digits=20)
> > > print(0.999999995, digits=20) # fails
> > > # 0.99999999500000003039
> >
> > The floating point representation of 1 - x^2/2 is the real culprit:
> > # 0.99999999500000003039
> >
> > The 3039 at the end is really an error due to the floating point
> > representation. However, this error blows up when inverting the value:
> > x = 1E-4;
> > y = 1 - x^2/2;
> > 1/(1 - y) - 2/x^2
> > # 1.215494
> > # should be 1/(x^2/2) - 2/x^2 = 0
> >
> >
> > The ugly thing is that the error only gets worse as x decreases. The
> > value neither drops to 0, nor does it blow up to infinity; but it gets
> > worse in a continuous manner. At least the reason has become now clear.
> >
> >
> > >
> > > Maybe some functions of type cos1p and cos1n would be handy for such
> > > computations (to replace the manual series expansion):
> > > cos1p(x) = 1 + cos(x)
> > > cos1n(x) = 1 - cos(x)
> > > Though, I do not have yet the big picture.
> > >
> >
> > Sincerely,
> >
> >
> > Leonard
> >
> > >
> > >
> > > On 8/17/2023 1:57 PM, Martin Maechler wrote:
> > >>> Leonard Mada
> > >>>  on Wed, 16 Aug 2023 20:50:52 +0300 writes:
> > >>  > Dear Iris,
> > >>  > Dear Martin,
> > >>
> > >>  > Thank you very much for your replies. I add a few comments.
> > >>
> > >>  > 1.) Correct formula
> > >>  > The formula in the Subject Title was correct. A small glitch
> > >> swept into
> > >>  > the last formula:
> > >>  > - 1/(cos(x) - 1) - 2/x^2
> > >>  > or
> > >>  > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
> > >>
> > >>  > 2.) log1p
> > >>  > Actually, the log-part behaves much better. And when it fails,
> > >> it fails
> > >>  > completely (which is easy to spot!).
> > >>
> > >>  > x = 1E-6
> > >>  > log(x) -log(1 - cos(x))/2
> > >>  > # 0.3465291
> > >>
> > >>  > x = 1E-8
> > >>  > log(x) -log(1 - cos(x))/2
> > >>  > # Inf
> > >>  > log(x) - log1p(- cos(x))/2
> > >>  > # Inf => fails as well!
> > >>  > # although using only log1p(cos(x)) seems to do the trick;
> > >>  > log1p(cos(x)); log(2)/2;
> > >>
> > >>  > 3.) 1/(1 - cos(x)) - 2/x^2
> > >>  > It is possible to convert the formula to one which is
> > >> numerically more
> > >>  > stable. It is also possible to compute it manually, but it
> > >> i