This reminds me of a situation in 1975 where a large computer service bureau had
contracted to migrate scientific software from a Univac 1108 to an IBM
System/360.
They spent 3 weeks trying to get the IBM to give the same eigenvectors on a
problem as the
Univac. There were at least 2 eigenvalue
This looks like what I call a "sumscale" problem, i.e., one where some simple
function of the parameters sums to a constant. I've done some work on
these, but don't have it with me just now. There are several approaches,
but they can be quite tricky. Will send some info in about a week or so
if you are
I recently gave a talk to the Ottawa PC Users Group about Sweave, knitr and
odfWeave. The
last is sometimes cranky, but I've found I can use it for word-processing
documents, and
if these are saved in odt format (open office), then odfWeave can process them
to
"finalized" odt form.
Recognize th
It appears you are using the approach "throw every method at a problem and
select the
answer you like". I use this quite a lot with optimx to see just what disasters
I can
create, but I do so to see if the software will return sensible error messages.
You will have to provide a reproducible exam
ALL", locale = "en_US.UTF-8")
> I suppose this also works under windows.
> Frans
>
> -Oorspronkelijk bericht-
> Van: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> Namens John C Nash
> Verzonden: vrijdag 10 augustus 2012 15:16
>
I'm trying to see if I can help some Windows users, but in WinXP and Win7
virtual machines
and in a Win 7-64 real environment, after extracting the odfWeave examples.odt
from the
package and putting it in my working directory ("My Documents") where R can see
and unpack
it, I get "unable to conve
I would ask contributors to this list to at least note that the sort of FUD
(Fear, Uncertainty and Doubt) mentioned by the poster is hearsay. Do we
have any
have any
contractual promises from SAS Inc. or other companies that they will guarantee
their
software? Have there been documented compen
Recently I looked into some ways to speed up a calculation in R (the Rayleigh
Quotient is
the example). I wanted to look at the byte-code compiler too. As a way of
making notes I
embedded my attempts in a knitR (.Rnw) file. The resulting pdf is linked from
the Rwiki at
http://rwiki.sciviews.org/
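The byte-compiler comparison mentioned is easy to set up; a sketch with a
made-up Rayleigh quotient function (the compiler package is part of base R):
library(compiler)
rq <- function(x, A) as.numeric(crossprod(x, A %*% x) / crossprod(x))
rqc <- cmpfun(rq)        # byte-compiled version of the same function
A <- diag(3); x <- c(1, 2, 3)
rq(x, A); rqc(x, A)      # identical results; compare speed with system.time()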
Duncan has given an indication of why nls() has troubles, and you have found a
way to work
around the problem partially. However, you may like to try nlmrt (from R-forge
project
R-forge.r-project.org/R/?group_id=395
It is intended to be very aggressive in finding a solution, and also to deal
wi
This looks like a homework trap set up to catch those trying to use facilities
like Rhelp.
f = exp(x^2 - y + 1/z) = exp(x^2) * exp(1/z) / exp(y)
To maximize clearly needs the biggest x (37), the smallest y (2), and a z that
makes exp(1/z) big, i.e., z approaching 0 from above. Except that you'll get Inf etc.
Actually, several of the op
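A quick check of the overflow behaviour, using the values discussed above:
f <- function(x, y, z) exp(x^2 - y + 1/z)
f(37, 2, 0.001)   # exp(1369 - 2 + 1000) overflows double precision: Inf
f(37, 2, 1)       # even exp(1368) is Inf -- the trap in the exercise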
By interpreting the code line by line and looking at the output of the lines, I
got the
following result. It looks like it needs the fifu converted to an expression,
then
evaluated. This suggests a workaround, but doesn't answer the underlying
question about
whether this is supposed to work this
I see at least 4 issues with trying this.
1) I am on record several times as saying that CG was the LEAST SUCCESSFUL of
the codes I put in my 1979 book; it is the basis of CG in optim().
2) Rcgmin is better (I implemented this, but the method is Yuan/Dai), and there
may be
other CG codes that can squeeze a bit
While lm() is a linear modelling tool, the constraints make this problem easier
to solve with a nonlinear tool.
more
up-to-date) have bounds constraints and "masks" i.e., fixed parameters.
I am actually looking for example problems of this
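A minimal sketch of bounds and masks, assuming the argument names of the
R-forge Rvmmin (bdmsk uses 1 for a free parameter and 0 for a masked, i.e.,
fixed one):
library(Rvmmin)
fn <- function(p) sum((p - c(1, 2, 3))^2)
gr <- function(p) 2 * (p - c(1, 2, 3))
# p[1] bounded in [0, 0.5]; p[3] masked (held) at its starting value 5
ans <- Rvmmin(par = c(0.1, 0, 5), fn = fn, gr = gr,
              lower = c(0, -10, -10), upper = c(0.5, 10, 10),
              bdmsk = c(1, 1, 0))
ans$par   # expect roughly c(0.5, 2, 5)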
Your issue is that nls returns "singular gradient", but I believe the real
trouble is that
if or ifelse are not in the derivatives table. My own (experimental, but in
R-forge)
package nlmrt has nlxb to mirror nls but be much more aggressive in finding a
solution,
and it gives an error msg that i
Your function will not evaluate as coded, i.e., llfn(start.par) doesn't "work",
as there
are unequal numbers of arguments. Also, while R allows you to use variables
that are not
explicitly defined for a function, I've almost always got into trouble if I
don't pass
them VERY carefully.
Finally,
When I run your problem in optimx (with all.methods=TRUE), L-BFGS-B fails
because the
function is evaluated out of range. optimx (actually the optfntools package)
from R-forge
can trap these, and it is usually a good idea to stop and figure out what is
going on.
Nevertheless, it seems a solution
There's no reason that the optimum cannot be at the bounds. Bounded problems
really do
sometimes have solutions on those bounds.
Compute the unconstrained gradient of your objective function at the bounds and
see if the
function is reduced when going across the bounds. The function here is assum
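A sketch of that gradient check on a made-up problem (numDeriv does the
differencing):
library(numDeriv)
fn <- function(p) (p[1] + 2)^2 + (p[2] - 1)^2   # unconstrained min at (-2, 1)
bestpar <- c(0, 1)       # reported solution, on the lower bound p[1] >= 0
grad(fn, bestpar)        # first component is positive: moving p[1] upward into
                         # the feasible region increases fn, so the constrained
                         # minimum really is on the bound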
Thanks Gabor and Ista,
I should have realized I could tweak parse(), since I've been using it elsewhere. Somehow
focused only on documentation of source().
JN
On 04/28/2012 10:36 AM, Gabor Grothendieck wrote:
On Sat, Apr 28, 2012 at 10:27 AM, John C Nash wrote:
I've been creat
I've been creating some R tools that manipulate objective functions for optimization. In
so doing, I create a character string with R code, and then want to have it in my
workspace. Currently -- and this works fine -- I write the code out, then use source() to
bring it in again. Example:
cstr<
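The parse() route that resolved this replaces the write-then-source() round
trip; a minimal sketch with a made-up string cstr:
cstr <- "myfun <- function(x) sum((x - 2)^2)"
eval(parse(text = cstr))   # brings myfun into the workspace directly
myfun(c(1, 3))             # 2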
As Bert Gunter points out, the statistic R-squared should not be given any
grand meaning.
However, (also in the archives) I use it in my nonlinear least squares codes as
a "sanity
check" -- and not more than that. Since it is computed as
{1 - (residual SS)/(SS from mean)},
do you really want t
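That sanity check takes three lines from any least-squares fit; a sketch with
made-up data:
x <- 1:10
y <- 2 * x + rnorm(10)
fit <- lm(y ~ x)
rss <- sum(residuals(fit)^2)   # residual SS
tss <- sum((y - mean(y))^2)    # SS from the mean
1 - rss / tss                  # the "sanity check" quantity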
This thread reveals that R has some holes in the solution of some of the linear algebra
problems that may arise. It looks like Jim Ramsay used a quick and dirty approach to the
generalized eigenproblem by using B^(-1) %*% A, which is usually not too successful due to
issues with condition of B a
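For symmetric A and positive definite B, a more stable route than
eigen(solve(B) %*% A) goes through the Cholesky factor of B; a base-R sketch on
made-up test matrices:
set.seed(2)
A <- crossprod(matrix(rnorm(16), 4))            # symmetric test matrix
B <- crossprod(matrix(rnorm(16), 4)) + diag(4)  # positive definite test matrix
R <- chol(B)                                    # B = t(R) %*% R
C <- t(backsolve(R, t(backsolve(R, A, transpose = TRUE)), transpose = TRUE))
e <- eigen(C, symmetric = TRUE)                 # C = R^{-T} A R^{-1}
e$values                                        # generalized eigenvalues
X <- backsolve(R, e$vectors)                    # eigenvectors of A x = lambda B x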
Peter Dalgaard has already given some indications. However, nls() is pretty
fragile as a
solver in my experience. I'm in the process of putting some more robust (in the
computing
and not statistical sense) solvers in the nlmrt package on the R-forge project
at
https://r-forge.r-project.org/R/?g
Your function is giving NaNs during the optimization.
The R-forge version of optimx() has functionality specifically intended to deal
with this.
NOTE: the CRAN version does not, and the R-forge version still has some
glitches!
However, I easily ran the code you supplied by changing optim to op
nls() often gives this message, which is misleading in that it is the Jacobian
that is not
of full rank in the solution of J * delta ~ -residuals, or in the more
conventional Gauss-Newton form J' * J * delta = -g = -J' * residuals. My view
is that the
gradient itself
cannot be "singular". It's just
Thanks to Hadley, William and Duncan for suggestions. I'm currently
implementing a
solution that is close to that of William and Duncan (and learning more about
environments
in the process). I suspect the reference classes are possibly a more reliable
long term
solution. I'll plead laziness unt
In trying to streamline various optimization functions, I would like to have a scratch pad
of working data that is shared across a number of functions. These can be called from
different levels within some wrapper functions for maximum likelihood and other such
computations. I'm sure there are o
While not having a perfect solution, I have made enough progress to be able to declare
"good enough", thanks in particular to Duncan Murdoch and Yihui Xie.
First, the font can be made smaller so output fits on a line and does not overflow the
margins. This is accomplished by putting the command
> options(width = 50)
> rep(1, 100)
>
> options(width = 90)
> rep(1, 100)
>
>
> Regards,
> Yihui
> --
> Yihui Xie
> Phone: 515-294-2465 Web: http://yihui.name
> Department of Statistics, Iowa State University
> 2215 Snedecor Hall, Ames, IA
>
>
>
> On Sun, Ma
The following example gives output with a line length of 103 on my system. It
is causing a
nuisance in creating a vignette. Is there something other than e.g.,
options(width=60) I
need to set? The Sweave FAQ suggests this should work.
options(width=60)
pastured <- data.frame(
time=c(9, 14, 21, 2
Once again, R has been accepted as an organization for the Google Summer of
Code (2012).
We invite students interested in this program to learn more about it. A good
starting
point is http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2012.
The Google
GSOC home page is http://www.goog
I am building some tools to carry out nonlinear least squares when the
residuals are very
small (something that nls specifically is NOT designed to do). However, I
wanted to put a
cross reference in the Rd file. Running R CMD check pkg gave a strange warning
Obsolete package(s) 'nls' in Rd xre
When I was still teaching undergraduate intro biz-stat (among that community it
is always
abbreviated), we needed to control the spreadsheet behaviour of TAs who entered
marks into
a spreadsheet. We came up with TellTable (the Sourceforge site is still around
with refs
at http://telltable-s.sour
Gotta love R.
Thanks to Bill Dunlap, Peter Langfelder and Jim Holtman for no less than 3 different
solutions.
JN
On 12-03-01 04:25 PM, Peter Langfelder wrote:
pstr<-c("b1=200", "b2=50", "b3=0.3")
split = sapply(strsplit(pstr, split = "="), I);
pnum = as.numeric(split[2, ]);
names(pnum) = split[1, ];
Not paying close attention to detail, I entered the equivalent of
pstr<-c("b1=200", "b2=50", "b3=0.3")
when what I wanted was
pnum<-c(b1=200, b2=50, b3=0.3)
There was a list thread in 2010 that shows how to deal with un-named vectors,
but the same
lapply solution doesn't seem to work here i.e.
Ben Bolker pointed out in a response about max. likelihood estimation that
parameter
scaling is not available in nlminb
On 02/03/2012 06:00 AM, r-help-requ...@r-project.org wrote:
> * if you were using one of the optimizing methods from optim() (rather
> than nlminb), e.g. L-BFGS-B, I would sugg
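The facility being referred to is optim()'s parscale control; a sketch with a
made-up badly scaled objective:
fn <- function(p) (p[1] - 1)^2 + (p[2] / 1000 - 2)^2   # optimum near c(1, 2000)
ans <- optim(c(0, 10), fn, method = "L-BFGS-B",
             control = list(parscale = c(1, 1000)))
ans$par   # roughly c(1, 2000)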
Peter and Bert have already made some pertinent remarks. This comment is a bit
tangential,
but in the same flavour. As they note, it is "goodness of fit relative to
what?" that is
important.
As a matter of course when doing nonlinear least squares, I generally compute
the quantity
[1 - resid
As I'm working with some folk who use Windows to prepare an article /
documentation, I'd
like to be able to know if we can use odfWeave. It seems there is no "official"
binary at
the moment for Windows. Does anyone have a working unofficial one, if possible
both win32
and win64 flavours? My skil
A particularly unfortunate aspect of "SANN" in optim() is that it will evaluate
the
objective function 'maxit' times and quit with conv=0, implying it has
"converged".
The Rd file points out a number of departures of SANN from the other
optimizers, but a key
one is that it does NOT return a res
optimx does allow you to use bounds. The default is using only methods from
optim(), but
even though I had a large hand in those methods, and they work quite well,
there are other
tools available within optimx that should be more appropriate for your problem.
For example, the current version of
The multiple exponential problem you are attempting has a well-known and long
history. Lanczos' 1956 book showed that changing the 4th decimal place in a
data set can change the fitted parameters hugely.
Nevertheless, if you just need a "fit" and not reliable parameters, you could
reparameterize to k1 and k2diff=
I think you need to make an expression. I tried
> nls.fn <- asym/((1+exp((xmid-x.seq)/scal)))
Error: object 'asym' not found
> nls.fn <- expression(asym/((1+exp((xmid-x.seq)/scal))))
> D(nls.fn,"asym")
1/((1 + exp((xmid - x.seq)/scal)))
>
Does that help? Maybe there are other approaches too.
JN
This kind of error seems to surprise R users. It surprises me that it doesn't
happen much
more frequently. The "BFGS" method of optim(), from the 1990 Pascal version of
my book, was called the Variable Metric method after the Fletcher (1970) paper
from which it was drawn. It
really works much better with
; optimize again for those starting points. What disappoints me is that
> even when I found a decent solution (the minimized value of 336) it
> was still worse than the Solver solution!
>
> And I am trying to prove to everyone here that we should do R, not Excel :-)
>
> Thanks again f
I won't requote all the other msgs, but the latest (and possibly a bit glitchy)
version of
optimx on R-forge
1) finds that some methods wander into domains where the user function fails
try() (new
optimx runs try() around all function calls). This includes L-BFGS-B
2) reports that the scaling i
You've identified a problem with the ismev package and it really is the package
maintainers who are in the best position to fix it. As noted, box constraints
(lower and
upper) go OUTSIDE the control=list(). (Only L-BFGS-B has bounds in optim().)
This is also the case for many routines called by t
The error is what it says: Your program has asked for an approximation to a
derivative
(you, or whoever wrote the package you are using, didn't provide an analytic
gradient, so
it's using an approximation), and the result was returned Inf.
This is VERY common -- the BFGS option of optim() is a
Could the problem be that nlm is for minimization? In fact, that is stated in
the first line of the manual.
JN
On 10/19/2011 06:00 AM, r-help-requ...@r-project.org wrote:
> Message: 62
> Date: Tue, 18 Oct 2011 11:12:09 -0700 (PDT)
> From: aazaff
> To: r-help@r-project.org
> Subject: [R] Non-linear maximization
BFGS is actually Fletcher's (1970) variable metric code that I modified with
him in
January 1976 in Dundee and then modified very, very slightly (to insist that
termination
only occurred on a steepest descent search -- this messes up the approximated
inverse
Hessian, but I have an experimental R
optim()
optimx package
minpackLM package
and several others
JN
On 09/15/2011 06:00 AM, r-help-requ...@r-project.org wrote:
> Message: 77
> Date: Wed, 14 Sep 2011 20:44:16 +0100
> From: Liam Brown
> To: r-help@r-project.org
> Subject: [R] Nonlinear Regression
> Message-ID:
>
> Content-Typ
The error msg is telling you that R cannot evaluate the loss function, so you
should not
expect answers.
You might try examining the data -- Are there NA or Inf entries?
Or prepare a dataframe with just X and Y, sort by X and graph.
Then check the nls computations by sampling, say, every 100 X'
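A sketch of those checks, assuming a data frame df with columns X and Y:
sum(!is.finite(df$X)); sum(!is.finite(df$Y))   # any NA, NaN or Inf entries?
df <- df[order(df$X), ]                        # sort by X
plot(df$X, df$Y)                               # eyeball the shape
dfs <- df[seq(1, nrow(df), by = 100), ]        # thin to every 100th point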
I've started to work on this again, and can confirm there seems to be some sort
of bug in
the gradient test at the beginning of the current R-forge version of optimx. It
is not
something obvious, and looks like a mixup in arguments to functions, which have
been an
issue since I've been trying to
Unless you are supplying analytic hessian code, you are almost certainly
getting an
approximation. Worse, if you do not provide gradients, these are the result of
two levels
of differencing, so you should expect some loss of precision in the approximate
Hessian.
Moreover, if your estimate of th
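One way to see the effect is to compare the Hessian optim() reports with a
numDeriv estimate on a test function; a sketch:
library(numDeriv)
fn <- function(p) (p[1] - 1)^2 + 10 * (p[2] - p[1]^2)^2
ans <- optim(c(-1.2, 1), fn, method = "BFGS", hessian = TRUE)
ans$hessian                                    # from differencing inside optim
max(abs(ans$hessian - hessian(fn, ans$par)))   # vs numDeriv's Richardson
                                               # version: they differ a little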
optimx uses exactly the same code as optim for BFGS. However, the call to optim
in optimx
is preceded by a check of the gradient at the starting values supplied using
numDeriv.
That is, we evaluate the gradient with gr=(user's function for gradient) and
then with
the grad() function from numDer
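That start-of-run check is easy to reproduce by hand; a sketch:
library(numDeriv)
fn <- function(p) sum((p - c(1, 2))^2)
gr <- function(p) 2 * (p - c(1, 2))      # analytic gradient to be verified
p0 <- c(5, 5)
max(abs(gr(p0) - grad(fn, p0)))          # tiny if gr is coded correctly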
optimx with BFGS uses optim, so you actually incur some overhead unnecessarily.
And BFGS
really needs good gradients (as does Rvmmin and Rcgmin which are updated BFGS
and CG, but
all in R and with bounds or box constraints).
From the Hessian, your function is one of the many (!) that have pretty
As I'm at least partly responsible for CG in optim, and packager of Rcgmin,
I'll recommend
the latter based on experience since it was introduced. I've so far seen no
example where
CG does better than Rcgmin, though I'm sure there are cases to be found.
However, Ben is right that if ADMB does s
2 comments below.
On 07/07/2011 06:00 AM, r-help-requ...@r-project.org wrote:
> Date: Wed, 6 Jul 2011 20:39:19 -0700 (PDT)
> From: EdBo
> To: r-help@r-project.org
> Subject: Re: [R] loop in optim
> Message-ID: <1310009959045-3650592.p...@n4.nabble.com>
> Content-Type: text/plain; charset=us-ascii
Interesting!
I get nice convergence in both 32 and 64 bit systems on 2.13.0. I agree the
older versions
are a bit of a distraction. The inconsistent behaviour on current R is a
concern.
Maybe Philip, Uwe, and I (and others who might be interested) should take this
off line
and see what is going
The error msg puts it quite clearly -- the initial parameters 1,1,1,1 are
inadmissible for
your function. You need to have valid initial parameters for the variable
metric method
(option BFGS).
This is one of the main problems users have with any optimization method. It is
ALWAYS a
good idea to
On Ubuntu 10.04, albeit on a machine with lots of memory, it ran fine. Here's
the output:
> rm(list=ls())
> gc()
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 131881 7.1 35 18.7 35 18.7
Vcells 128838 1.0 786432 6.0 559631 4.3
> p <- 500
> n <
Is this a homework problem in finding the largest eigensolution of W?
If not, I'd be trying to maximize (D' W D) / (D' D)
using (n-1) values of D and setting one value to 1 -- hopefully a value that is
not going
to be zero.
JN
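A sketch of that parameterization on a small made-up symmetric W (optim
minimizes, so the quotient is negated):
set.seed(1)
n <- 4
W <- crossprod(matrix(rnorm(n * n), n))          # symmetric test matrix
negrq <- function(d, W) {
  D <- c(1, d)                                   # first element pinned at 1
  -as.numeric((t(D) %*% W %*% D) / (t(D) %*% D))
}
ans <- optim(rep(0.5, n - 1), negrq, W = W, method = "BFGS")
-ans$value             # should approach max(eigen(W)$values)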
>
> Date: Wed, 11 May 2011 17:28:54 -0300
> From: Leonardo Monaste
It's likely that the loss function has a log() or 1/x and the finite difference
approximations to gradients have added / subtracted a small number and caused a
singularity. Unfortunately, you'll need to dig into the fGarch code or write
your own
(ouch!). Or perhaps the fGarch package maintainer wi
Last Friday we learned that R is accepted again for the Google Summer of Code.
R's "ideas" are at
http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2011
On that page is a link to our google groups list for mentors and prospective
students.
See http://www.google-melange.com/ for the
For functions that have a reasonable structure, i.e., one or at most a few optima, it is
it is
certainly a sensible task. Separable functions are certainly nicer (10K 1D
minimizations),
but it is pretty easy to devise functions e.g., generalizations of Rosenbrock,
Chebyquad
and other functions that are h
There are considerable differences between the algorithms. And BFGS is an
unfortunate
nomenclature, since there are so many variants that are VERY different. It was
called
"variable metric" in my book from which the code was derived, and that code was
from Roger
Fletcher's Fortran VM code based
Using R2.12.1 on Ubuntu 10.04.1 I've tried to run the following code chunk in
odfWeave
<<>>=
x<-seq(1:100)/10
y<-sin(cos(x/pi))
imageDefs <- getImageDefs()
imageDefs$dispWidth <- 4.5
imageDefs$dispHeight<- 4.5
setImageDefs(imageDefs)
X11(type="cairo")
plot(x,y)
title(main="sin(cos(x/pi))")
savePlot
In a little over a month (Mar 28), students will have just over a week (until
April 8) to
apply to work on Google Summer of Code projects. In the past few years, R has
had several
such projects funded. Developers and mentors are currently preparing project
outlines on
the R wiki at http://rwiki.
Kamel,
You have already had several comments suggesting some ideas for improvement,
namely,
1) correct name for iteration limit (Karl Ove Hufthammer)
2) concern about number of parameters and also possibilities of multiple minima
(Doug Bates)
3) use of optimx to allow several optimizers to be t
Of the issues with optimization methods (optim, optimx, and others) that I see,
a good percentage arise because the objective function (or user-supplied
gradient) is mis-coded.
mis-coded.
However, an almost equal number are due to functions getting into overflow or
underflow
territory and yielding quantitie
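One common defence is a wrapper that traps non-finite values and returns a
large penalty instead; a sketch (fn is the user's objective, and 1e35 is an
arbitrary "big" value):
fn_safe <- function(p, ...) {
  v <- try(fn(p, ...), silent = TRUE)
  if (inherits(v, "try-error") || !is.finite(v)) 1e35 else v
}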
I spent more time than I should have debugging a script because I wanted
x<-seq(0,100)*0.1
but typed
x<-seq(0:100)*0.1
seq(0:100) yields 1 to 101,
Clearly my own brain-to-fingers fumble, but possibly one that others may want
to avoid.
JN
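The difference is easy to demonstrate:
head(seq(0, 100) * 0.1)   # 0.0 0.1 0.2 0.3 0.4 0.5  -- what was wanted
head(seq(0:100) * 0.1)    # 0.1 0.2 0.3 0.4 0.5 0.6  -- seq(0:100) is 1:101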
Ravi Varadhan and I have been looking at UCMINF to try to identify why it gives
occasional
(but not reproducible) errors, seemingly on Windows only. There is some
suspicion that its
use of DBLEPR for finessing the Fortran WRITE() statements may be to blame.
While I can
find DBLEPR in Venables an
In the sort of problem mentioned below, the suggestion to put in gradients (I
believe this
is what is meant by "minus score vector") is very important. Using analytic
gradients is
almost always a good idea in optimization of smooth functions for both
efficiency of
computation and quality of resu
Ravi has already responded about the possibility of using nls(). He and I also have put up
the optimx package which allows a control 'maximize=TRUE' because of the awkwardness of
using fnscale in optim. (optimx still lets you use optim()'s tools too, but wrapped with
this facility.) There are a
Dirk E. has properly focussed the discussion on measurement rather than opinion. I'll add
the issue of the human time taken to convert, and more importantly debug, interfaced code.
That too could be measured, but we rarely see human hours to code/debug/test reported.
Moreover, I'll mention the
Dear R colleagues,
I'm looking for some examples or vignettes or similar to suggest good ways to convert an
expression to a function. In particular, I'd like to (semi-) automate the conversion of
nls() expressions to residual functions. Example
Given variables y, T and parameters b1, b2, b3 i
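A sketch of the conversion meant here, with a made-up logistic-style model in y
and T with parameters b1, b2, b3:
# nls()-style expression:  y ~ b1 / (1 + b2 * exp(-b3 * T))
resfn <- function(b, y, T) {
  y - b[1] / (1 + b[2] * exp(-b[3] * T))          # residual vector
}
ssfn <- function(b, y, T) sum(resfn(b, y, T)^2)   # objective for a minimizer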
Sometimes it is easier to just write it. See below.
On 10-07-30 06:00 AM, r-help-requ...@r-project.org wrote:
Date: Thu, 29 Jul 2010 11:15:05 -0700 (PDT)
From: sammyny
To:r-help@r-project.org
Subject: Re: [R] newton.method
Message-ID:<1280427305687-2306895.p...@n4.nabble.com>
Content-Type: text/
I won't add to the quite long discussion about the vagaries of nlminb, but will note that
over a long period of software work in this optimization area I've found a number of
programs and packages that do strange things when the objective is a function of a single
parameter. Some methods quite e
Without the data and objective function, it is fairly difficult to tell what is
going on.
However, we can note:
- no method is specified, but it would have to be L-BFGS-B as this is
the only
one that can handle box constraints
- the fourth parameter is at a different scale, an
I am trying to estimate an Arrhenius-exponential model in R. I have one
vector of data containing failure times, and another containing
corresponding temperatures. I am trying to optimize a maximum likelihood
function given BOTH these vectors. However, the optim command takes only
one such vect
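optim() passes extra named arguments through "..." to the objective, so both
vectors can ride along; a sketch with a made-up exponential/Arrhenius
likelihood (not the poster's actual model):
nll <- function(par, ftimes, temps) {            # negative log-likelihood
  rate <- exp(par[1] - par[2] / temps)           # Arrhenius-style rate
  -sum(dexp(ftimes, rate = rate, log = TRUE))
}
set.seed(42)
temps  <- runif(50, 300, 400)
ftimes <- rexp(50, rate = exp(1 - 500 / temps))  # synthetic data
optim(c(0, 100), nll, ftimes = ftimes, temps = temps)$par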
I have not included the previous postings because they came out very strangely on my mail
reader. However, the question concerned the choice of minimizer for the zeroinfl()
function, which apparently allows any of the current 6 methods of optim() for this
purpose. The original poster wanted to u
I think that Marc S. has provided some of the better refs. for R and its usage
in
commercial and organizational settings.
However, the kinds of bloody-minded "We're going to insist you use
inappropriate software
because we are idiots" messages are not news to many of us. Duncan points out
that
I'd echo Paul's sentiments that we need to know where and when and how R goes
slowly. I've
been working on optimization in many environments for many years (try ?optim
and you'll
find me mentioned!). So has Dave Fournier. AD Model Builder has some real
strengths and we
need Automatic Differentia
now outcome, that would be helpful. Since .Platform allows me to determine OS
type, I should be able to work out a more or less platform-independent
function.
JN
biostatmatt wrote:
> On Thu, 2010-05-27 at 19:08 -0400, Prof. John C Nash wrote:
>> I would like to have a function that wou
I would like to have a function that would wait either until a specified
timeout (in
seconds preferably) or until a key is pressed. I've found such a function quite
useful in
other programming environments in setting up dialogs with users or displaying
results,
rather like a timed slideshow that
s already dealt with this type of issue, I'd be happy to know. For example,
if there is a forceLoad() somewhere, it would save the effort above and could be useful
for developers to ensure they are using the right version of a package.
JN
From: Prof. John C Nash
Date: Thu, 15 Apr 2
I've been working on a fairly complex package that is a wrapper for several optimization
routines. In this work, I've attempted to do the following:
- edit the package code foo.R
- in a root terminal at the right directory location
R CMD REMOVE foo
R CMD INSTALL foo
However, I
I have a problem. I am using the NLME library to fit a non-linear model. There is a linear component to the model that has a couple parameter values that can only be positive (the coefficients are embedded in a sqrt). When I try and fit the model to data the search algorithm tries to see if a n
If you have a perfect fit, you have zero residuals. But in the nls manual page
we have:
Warning:
*Do not use ‘nls’ on artificial "zero-residual" data.*
So this is a case of complaining that your diesel car is broken because you ignored the
"Diesel fuel only" sign on the filler cap and
Though I've been on the list for quite a while, I've not yet figured out what the "Odp:"
entry in subject lines means. Is it important information?
JN
I have been talking with Gabriel Wainer about the possibility of some community building
with R and simulation workers. The conference announced below is just before UseR (but in
that capital to the north of Washington).
If anyone on this list is participating or thinking of participating, perh
Besides the suggestions made by others, you may want to look at the R-wiki, where there is
a section on installing rattle and its dependencies, but mostly for Linux distros. Rattle
involves lots of other tools, which makes it challenging to install. You could do the
community a service by addin
If the data is fairly small, send it and the objective function to me off-list and I'll
give it a quick try.
However, this looks very much like the kind of distance-constrained type of problem like
the "largest small polygon" i.e., what is the maximum area hexagon where no vertex is more
than
optim really isn't intended for [1D] functions. And if you have a constrained search area,
it pays to use it. The result you are getting is like the second root of a quadratic that
you are not interested in.
You may want to be rather careful about the problem to make sure you have the function
There is also the OptimizeR project on R-forge
http://r-forge.r-project.org/R/?group_id=395. Other related projects are there also, but
I'll let their authors speak for themselves. Stefan Theussl did a good job in the Task
View, but it is from last June, and it would be a monumental effort to ke
Has anyone else experienced frustration with the LinkedIn site? It's been spamming me
since I tried to join, and I seem to see R postings, but I've never been able to "confirm"
my email address i.e., it never sent me THAT email. So I can never get past the login
screen to see more than the subje
Is this a transient problem, or has the link to the R wiki on the R home page
(www.r-project.org) to http://wiki.r-project.org/ been corrupted? I can find
http://rwiki.sciviews.org that works.
JN
Possibly of limited use to the original poster, but of interest more generally, there are
a number of tools developed in the 70s for updating the matrix decompositions. There's at
least one in my 1979 book "Compact numerical methods for computers" (still in print
apparently) -- we didn't have en
Doing a hessian estimate at each Nelder-Mead iteration is rather like going
from den Haag
to Delft as a pedestrian walking and swimming via San Francisco. The structure of the
algorithm means the Hessian estimate is done in addition to the NM work.
While my NM code was used for optim(), I did
Today I wanted to update one of my r-forge packages (optimx) and when I ran the
usual
R CMD check optimx
I got all sorts of odd errors with Latex (and a very slow R CMD check). I thought I had
managed to damage the Rd file, which indeed I had changed. Since I recently started using
Ubuntu 9.0