Hello,

I am using CentOS Linux 6.0, R 2.14.1 and nlme 3.1-103

I am trying to fit a series of models using nlme, and for some datasets the fit 
would just "stall": memory usage would grow and grow until it eventually crashed.

I looked carefully at one of the problem datasets and ran it separately, and it 
worked. After narrowing down the possibilities, I found that the main difference 
between the two runs was in the initial values. Subtracting one set from the other gives:

> initialValuesOriginal - initialValuesThatWorks
          A.R           A.a           A.y           B.R           B.a
-3.194922e-08  0.000000e+00  0.000000e+00 -9.249630e-08  0.000000e+00
          B.y           C.R           C.a           C.y           D.R
 0.000000e+00 -1.713935e-06 -8.639821e-09  1.032083e-09  3.716880e-08
          D.a           D.y           E.R           E.a           E.y
 0.000000e+00  0.000000e+00  1.766460e-06  0.000000e+00  0.000000e+00
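
For reference, here is a sketch of the kind of precision check involved; the two 
vectors below are small hypothetical stand-ins for my actual initialValuesOriginal 
and initialValuesThatWorks (which have 15 elements each), only meant to show that 
differences of this size are invisible under R's default printing:

## hypothetical two-element stand-ins for the real start-value vectors
initialValuesOriginal  <- c(A.R = 0.12345678903194922, A.a = 1.5)
initialValuesThatWorks <- c(A.R = 0.1234567890,        A.a = 1.5)

initialValuesOriginal - initialValuesThatWorks            # tiny differences
all.equal(initialValuesOriginal, initialValuesThatWorks)  # TRUE at default tolerance
print(initialValuesOriginal, digits = 17)                 # full stored precision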

How can such a small difference have such a big effect (working fine versus 
stalling the entire machine)?

I also tried rounding the initial values to 10 decimal digits, and the fit then worked fine.
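
In case it helps, this is a minimal sketch of that workaround. It uses the 
Loblolly example from ?nlme rather than my own model and data (which are not 
shown here), and the extra decimals in the start values are made up; the only 
relevant part is the round(..., 10) call before the values are passed to nlme():

library(nlme)

## made-up start values with spurious extra decimals, standing in for mine
startOriginal <- c(Asym = 103.00000003194922, R0 = -8.50000000173, lrc = -3.3)
startRounded  <- round(startOriginal, 10)   # round to 10 decimal digits

## Loblolly example from ?nlme, fitted with the rounded start values
fm1 <- nlme(height ~ SSasymp(age, Asym, R0, lrc),
            data = Loblolly,
            fixed = Asym + R0 + lrc ~ 1,
            random = Asym ~ 1,
            start = startRounded)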

I will look into what nlme is doing tomorrow, but I was wondering whether there 
is something I don't know about how R works, or about passing initial values 
with many decimal digits to nlme, that could explain this behaviour.

Thanks in advance,
Ramiro
