Interesting!
I get nice convergence on both 32- and 64-bit systems on 2.13.0. I agree the
older versions are a bit of a distraction. The inconsistent behaviour on
current R is a concern. Maybe Philip, Uwe, and I (and others who might be
interested) should take this off-line and see what is going
The error message puts it quite clearly -- the initial parameters (1,1,1,1)
are inadmissible for
your function. You need to have valid initial parameters for the variable
metric method
(option BFGS).
This is one of the main problems users have with any optimization method. It is
ALWAYS a
good idea to
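The advice cut off above can be illustrated with a pre-flight check (a minimal sketch; the objective function here is my own example, not the poster's): evaluate the objective at the starting point and confirm it is finite before handing it to optim().

```r
# Sketch (assumed example function): always verify the objective is
# computable at the start before calling optim().
fn <- function(p) sum(log(p)) + sum((p - 2)^2)   # log() fails for p <= 0

start <- c(1, 1, 1, 1)
f0 <- try(fn(start), silent = TRUE)
if (inherits(f0, "try-error") || !is.finite(f0)) {
  stop("Initial parameters are inadmissible for this objective")
}
ans <- optim(start, fn, method = "BFGS")
```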
On Ubuntu 10.04, albeit on a machine with lots of memory, it seems to work
fine. Here's the output:
> rm(list=ls())
> gc()
used (Mb) gc trigger (Mb) max used (Mb)
Ncells 131881 7.1 35 18.7 35 18.7
Vcells 128838 1.0 786432 6.0 559631 4.3
> p <- 500
> n <
Is this a homework problem in finding the largest eigensolution of W?
If not, I'd be trying to maximize (D' W D)/ (D' D)
using (n-1) values of D and setting one value to 1 -- hopefully a value that is
not going
to be zero.
JN
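The suggestion above can be sketched as follows (W here is a random symmetric matrix of my own making, purely for illustration): maximize the Rayleigh quotient over the n-1 free elements of D, with the first element pinned at 1.

```r
# Sketch of the Rayleigh-quotient approach (assumed symmetric W).
set.seed(1)
n <- 5
A <- matrix(rnorm(n * n), n, n)
W <- crossprod(A)                        # symmetric positive definite example

rq <- function(dfree) {
  D <- c(1, dfree)                       # fix the first element at 1
  as.numeric((t(D) %*% W %*% D) / sum(D^2))
}
ans <- optim(rep(0.1, n - 1), rq, method = "BFGS",
             control = list(fnscale = -1))   # fnscale = -1 => maximize
# ans$value approximates the largest eigenvalue of W, provided the
# corresponding eigenvector's first component is not zero
```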
>
> Date: Wed, 11 May 2011 17:28:54 -0300
> From: Leonardo Monaste
It's likely that the loss function has a log() or 1/x and the finite difference
approximations to gradients have added / subtracted a small number and caused a
singularity. Unfortunately, you'll need to dig into the fGarch code or write
your own
(ouch!). Or perhaps the fGarch package maintainer wi
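One common defence (a sketch only; this is not fGarch's actual code, and the loss function is an assumed example): wrap the loss so that any non-finite value at a perturbed parameter becomes a large penalty, which keeps finite-difference gradient steps from crashing the optimizer.

```r
# Safeguarding wrapper: replace NA/NaN/Inf loss values with a big number.
safe_loss <- function(par, loss, big = 1e10) {
  val <- try(loss(par), silent = TRUE)
  if (inherits(val, "try-error") || !is.finite(val)) big else val
}
loss <- function(p) -sum(log(p)) + sum(p)   # log() blows up at p <= 0
optim(c(0.5, 0.5), safe_loss, loss = loss, method = "BFGS")
```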
Last Friday we learned that R is accepted again for the Google Summer of Code.
R's "ideas" are at
http://rwiki.sciviews.org/doku.php?id=developers:projects:gsoc2011
On that page is a link to our google groups list for mentors and prospective
students.
See http://www.google-melange.com/ for the
--
>> Ravi Varadhan, Ph.D.
>> Assistant Professor,
>> Division of Geriatric Medicine and Gerontology School of Medicine Johns
>> Hopkins University
>>
>> Ph. (410) 502-2619
>> email: rvarad...@jhmi.edu
>>
>> -Original Messag
For functions that have a reasonable structure, i.e., one or at most a few
optima, it is
certainly a sensible task. Separable functions are certainly nicer (10K 1D
minimizations),
but it is pretty easy to devise functions e.g., generalizations of Rosenbrock,
Chebyquad
and other functions that are h
There are considerable differences between the algorithms. And BFGS is an
unfortunate
nomenclature, since there are so many variants that are VERY different. It was
called
"variable metric" in my book from which the code was derived, and that code was
from Roger
Fletcher's Fortran VM code based
Using R2.12.1 on Ubuntu 10.04.1 I've tried to run the following code chunk in
odfWeave
<<>>=
x<-seq(1:100)/10
y<-sin(cos(x/pi))
imageDefs <- getImageDefs()
imageDefs$dispWidth <- 4.5
imageDefs$dispHeight<- 4.5
setImageDefs(imageDefs)
X11(type="cairo")
plot(x,y)
title(main="sin(cos(x/pi))")
savePlot
In a little over a month (Mar 28), students will have just over a week (until
April 8) to
apply to work on Google Summer of Code projects. In the past few years, R has
had several
such projects funded. Developers and mentors are currently preparing project
outlines on
the R wiki at http://rwiki.
Kamel,
You have already had several comments suggesting some ideas for improvement,
namely,
1) correct name for iteration limit (Karl Ove Hufthammer)
2) concern about number of parameters and also possibilities of multiple minima
(Doug Bates)
3) use of optimx to allow several optimizers to be t
Of the issues I see with optimization methods (optim, optimx, and others), a
good percentage arise because the objective function (or gradient, if
user-supplied) is mis-coded.
However, an almost equal number are due to functions getting into overflow or
underflow
territory and yielding quantitie
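A small illustration of the underflow side (assumed data, not from any post): a likelihood computed as a raw product dies long before the same quantity computed on the log scale.

```r
# Underflow demonstration: product of many densities vs. sum of log-densities.
set.seed(1)
x <- rpois(500, 4)
naive  <- prod(dpois(x, 4))              # underflows to exactly 0
logged <- sum(dpois(x, 4, log = TRUE))   # stays finite and usable
```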
I spent more time than I should have debugging a script because I wanted
x<-seq(0,100)*0.1
but typed
x<-seq(0:100)*0.1
seq(0:100) yields 1 to 101,
Clearly my own brain-to-fingers fumble, but possibly one others may want to
avoid.
JN
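The fumble is easy to demonstrate side by side (seq() with a single vector argument returns seq_along that vector, i.e. 1:101 here):

```r
x_wanted <- seq(0, 100) * 0.1   # 0.0, 0.1, ..., 10.0
x_typed  <- seq(0:100) * 0.1    # seq(0:100) is 1:101, so 0.1, ..., 10.1
range(x_wanted)                 # 0 10
range(x_typed)                  # 0.1 10.1
```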
__
R-hel
Ravi Varadhan and I have been looking at UCMINF to try to identify why it gives
occasional
(but not reproducible) errors, seemingly on Windows only. There is some
suspicion that its
use of DBLEPR for finessing the Fortran WRITE() statements may be to blame.
While I can
find DBLEPR in Venables an
In the sort of problem mentioned below, the suggestion to put in gradients (I
believe this
is what is meant by "minus score vector") is very important. Using analytic
gradients is
almost always a good idea in optimization of smooth functions for both
efficiency of
computation and quality of resu
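A small sketch of the point (function and gradient are my own example): optim() accepts an analytic gradient as its third argument.

```r
# Supplying an analytic gradient to optim() (Rosenbrock-like example).
fn <- function(p) (p[1] - 1)^2 + 10 * (p[2] - p[1]^2)^2
gr <- function(p) c(2 * (p[1] - 1) - 40 * p[1] * (p[2] - p[1]^2),
                    20 * (p[2] - p[1]^2))
ans <- optim(c(-1, 2), fn, gr, method = "BFGS")
# compare with the finite-difference default:
# optim(c(-1, 2), fn, method = "BFGS")
```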
Ravi has already responded about the possibility of using nls(). He and I also have put up
the optimx package which allows a control 'maximize=TRUE' because of the awkwardness of
using fnscale in optim. (optimx still lets you use optim()'s tools too, but wrapped with
this facility.) There are a
Dirk E. has properly focussed the discussion on measurement rather than opinion. I'll add
the issue of the human time taken to convert, and more importantly debug, interfaced code.
That too could be measured, but we rarely see human hours to code/debug/test reported.
Moreover, I'll mention the
Dear R colleagues,
I'm looking for some examples or vignettes or similar to suggest good ways to convert an
expression to a function. In particular, I'd like to (semi-) automate the conversion of
nls() expressions to residual functions. Example
Given variables y, T and parameters b1, b2, b3 i
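One possible approach (a sketch under my own naming, not an established API): evaluate the two sides of the model formula in an environment built from the data plus a named parameter vector.

```r
# Hypothetical make_resid(): turn an nls()-style formula into a residual
# function of a parameter vector. All names here are illustrative.
make_resid <- function(formula, data, pnames) {
  lhs <- formula[[2]]; rhs <- formula[[3]]
  function(par) {
    env <- c(as.list(data), as.list(setNames(par, pnames)))
    eval(lhs, env) - eval(rhs, env)
  }
}
# usage sketch:
#   res <- make_resid(y ~ b1 / (1 + b2 * exp(-b3 * T)), mydata,
#                     c("b1", "b2", "b3"))
#   ssq <- function(par) sum(res(par)^2)
```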
Sometimes it is easier to just write it. See below.
On 10-07-30 06:00 AM, r-help-requ...@r-project.org wrote:
Date: Thu, 29 Jul 2010 11:15:05 -0700 (PDT)
From: sammyny
To:r-help@r-project.org
Subject: Re: [R] newton.method
Message-ID:<1280427305687-2306895.p...@n4.nabble.com>
Content-Type: text/
I won't add to the quite long discussion about the vagaries of nlminb, but will note that
over a long period of software work in this optimization area I've found a number of
programs and packages that do strange things when the objective is a function of a single
parameter. Some methods quite e
Without the data and objective function, it is fairly difficult to tell what
is going on. However, we can note:
- no method is specified, but it would have to be L-BFGS-B, as this is the
  only one that can handle box constraints
- the fourth parameter is at a different scale, an
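The scale issue can be addressed with the parscale control (all numbers below are made up for illustration): rescale so every parameter is of order 1 as the optimizer sees it.

```r
# Sketch: a fourth parameter around 5e4 dominates unless rescaled.
myfn <- function(p) sum((p - c(0.5, 2, 1, 5e4))^2 / c(1, 1, 1, 1e8))
optim(par = c(1, 1, 1, 1e4), fn = myfn, method = "L-BFGS-B",
      lower = c(0, 0, 0, 0), upper = c(10, 10, 10, 1e6),
      control = list(parscale = c(1, 1, 1, 1e4)))
```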
I am trying to estimate an Arrhenius-exponential model in R. I have one
vector of data containing failure times, and another containing
corresponding temperatures. I am trying to optimize a maximum likelihood
function given BOTH these vectors. However, the optim command takes only
one such vect
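The standard fix is that optim() optimizes over one parameter vector only; the data vectors are passed through `...` to the objective. The model form and numbers below are assumptions for illustration.

```r
# MLE with two data vectors passed via '...' (assumed Arrhenius-type rate).
nll <- function(par, times, temp) {
  rate <- exp(par[1] - par[2] / temp)
  -sum(dexp(times, rate = rate, log = TRUE))
}
set.seed(42)
temp  <- runif(50, 300, 400)
times <- rexp(50, rate = exp(2 - 500 / temp))
fit <- optim(c(1, 100), nll, times = times, temp = temp)
```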
I have not included the previous postings because they came out very strangely on my mail
reader. However, the question concerned the choice of minimizer for the zeroinfl()
function, which apparently allows any of the current 6 methods of optim() for this
purpose. The original poster wanted to u
I think that Marc S. has provided some of the better refs. for R and its usage
in
commercial and organizational settings.
However, the kinds of bloody-minded "We're going to insist you use
inappropriate software
because we are idiots" messages are not news to many of us. Duncan points out
that
I'd echo Paul's sentiments that we need to know where and when and how R goes
slowly. I've
been working on optimization in many environments for many years (try ?optim
and you'll
find me mentioned!). So has Dave Fournier. AD Model Builder has some real
strengths and we
need Automatic Differentia
now
outcome, that
would be helpful. Since .Platform allows me to determine OS type, I should be
able to work
out a more or less platform-independent function.
JN
biostatmatt wrote:
> On Thu, 2010-05-27 at 19:08 -0400, Prof. John C Nash wrote:
>> I would like to have a function that wou
I would like to have a function that would wait either until a specified
timeout (in
seconds preferably) or until a key is pressed. I've found such a function quite
useful in
other programming environments in setting up dialogs with users or displaying
results,
rather like a timed slideshow that
s already dealt with this type of issue, I'd be happy to know. For example,
if there is a forceLoad() somewhere, it would save the effort above and could be useful
for developers to ensure they are using the right version of a package.
JN
From: Prof. John C Nash
Date: Thu, 15 Apr 2
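There is no forceLoad() in base R; a rough sketch of what such a helper could look like (my own naming, and note that unloadNamespace() can fail when other loaded packages import the target):

```r
# Hypothetical forceLoad(): not a base R function, just an illustration.
forceLoad <- function(pkg) {
  if (pkg %in% loadedNamespaces()) {
    unloadNamespace(pkg)   # may fail if other namespaces import pkg
  }
  library(pkg, character.only = TRUE)
}
```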
I've been working on a fairly complex package that is a wrapper for several optimization
routines. In this work, I've attempted to do the following:
- edit the package code foo.R
- in a root terminal at the right directory location
R CMD REMOVE foo
R CMD INSTALL foo
However, I
I have a problem. I am using the nlme library to fit a non-linear model. There is a linear component to the model that has a couple of parameter values that can only be positive (the coefficients are embedded in a sqrt). When I try to fit the model to data, the search algorithm tries to see if a n
If you have a perfect fit, you have zero residuals. But in the nls manual page
we have:
Warning:
*Do not use ‘nls’ on artificial "zero-residual" data.*
So this is a case of complaining that your diesel car is broken because you ignored the
"Diesel fuel only" sign on the filler cap and
Though I've been on the list for quite a while, I've not yet figured out what the "Odp:"
entry in subject lines means. Is it important information?
JN
I have been talking with Gabriel Wainer about the possibility of some community building
with R and simulation workers. The conference announced below is just before UseR (but in
that capital to the north of Washington).
If anyone on this list is participating or thinking of participating, perh
Besides the suggestions made by others, you may want to look at the R-wiki, where there is
a section on installing rattle and its dependencies, but mostly for Linux distros. Rattle
involves lots of other tools, which makes it challenging to install. You could do the
community a service by addin
If the data is fairly small, send it and the objective function to me off-list and I'll
give it a quick try.
However, this looks very much like the kind of distance-constrained type of problem like
the "largest small polygon" i.e., what is the maximum area hexagon where no vertex is more
than
optim really isn't intended for [1D] functions. And if you have a constrained search area,
it pays to use it. The result you are getting is like the second root of a quadratic that
you are not interested in.
You may want to be rather careful about the problem to make sure you have the function
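For one-parameter problems the dedicated tool is optimize(), which searches only the interval you give it; the example function below is an assumption for illustration.

```r
# A cubic with a local max near 0.92 and a local min near 5.08.
f <- function(x) x * (x - 2) * (x - 7)
optimize(f, interval = c(2, 7))   # restricts the search to the region of interest
```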
There is also the OptimizeR project on R-forge
http://r-forge.r-project.org/R/?group_id=395. Other related projects are there also, but
I'll let their authors speak for themselves. Stefan Theussl did a good job in the Task
View, but it is from last June, and it would be a monumental effort to ke
Has anyone else experienced frustration with the LinkedIn site? It's been spamming me
since I tried to join, and I seem to see R postings, but I've never been able to "confirm"
my email address i.e., it never sent me THAT email. So I can never get past the login
screen to see more than the subje
Is this a transient problem, or has the link to the R wiki on the R home page
(www.r-project.org) to http://wiki.r-project.org/ been corrupted? I can find
http://rwiki.sciviews.org that works.
JN
Possibly of limited use to the original poster, but of interest more generally, there are
a number of tools developed in the 70s for updating the matrix decompositions. There's at
least one in my 1979 book "Compact numerical methods for computers" (still in print
apparently) -- we didn't have en
Doing a hessian estimate at each Nelder-Mead iteration is rather like going
from den Haag
to Delft as a pedestrian walking and swimming via San Francisco. The structure of the
algorithm means the Hessian estimate is done in addition to the NM work.
While my NM code was used for optim(), I did
Today I wanted to update one of my r-forge packages (optimx) and when I ran the
usual
R CMD check optimx
I got all sorts of odd errors with Latex (and a very slow R CMD check). I thought I had
managed to damage the Rd file, which indeed I had changed. Since I recently started using
Ubuntu 9.0
Without the data / script, I'm guessing that it is likely an attempt to evaluate the loss
function at an inadmissible point e.g., at the constraint where there is a log(0).
Different optimization tools handle things differently, and there are a couple of us
working (very slowly due to other th
As per ?optim
Usage:
     optim(par, fn, gr = NULL, ...,
           method = c("Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN"),
           lower = -Inf, upper = Inf,
           control = list(), hessian = FALSE)
Note that the optimx() function on R-forge
http://r-forge.r-project.org/R/?grou
the default itermax=200, as in:
>
> ss <- DEoptim(fun, lower=rep(0,3), upper=c(10e7, 10, 10),
> control=list(trace=FALSE, itermax=1000))
>
> On Wed, 2 Dec 2009, Prof. John C Nash wrote:
>
>> Kate Mullen showed one approach to this problem by using DEOptim to get
>
Kate Mullen showed one approach to this problem by using DEOptim to get
some good starting values.
However, I believe that the real issue is scaling (Warning: well-ridden
hobby-horse!).
With appropriate scaling, as below, nls does fine. This isn't saying nls
is perfect -- I've been trying to figu
As Ben Bolker has indicated, I am working on various improvements to the
functionality of optim() along with others, esp. Ravi Varadhan and Kate
Mullen.
With relevance to the posts by Sebastien Bihorel and Ben Bolker about
output of point/function value information on each evaluation, I am
wo
What do you mean by the "estimates were very bad"? In nearly 40 years of
working with optimization, I've seen badly set-up functions cause
troubles, I've seen multiple minima situations, I've seen comparisons of
results from one data set to the estimates for another, and I've seen
optimization pro
Most methods for optimization in R seek local minima, though there are
some -- mainly for >1 dimensions, that attempt to find the global
minimum stochastically (optim method SANN, DEoptim). SANN does, I
believe, handle 1 dimension, but it has no convergence test; it
runs a fixed number of fu
In order to instrument functions for optimization or DE solving etc., I
want to set up a private ensemble of working data somewhere in my
workspace that won't collide with the problem-data. I can do this
already with named objects within .GlobalEnv, but I'd like to have
functions like
fnmon.setup<
> Date: Fri, 30 Oct 2009 09:29:06 +0100
> From: Christophe Dutang
> Subject: Re: [R] [R-SIG-Finance] Fast optimizer
> To: R_help Help
> Cc: r-help@r-project.org
>> > Ok. I have the following likelihood function.
>> >
>> > L <- p*dpois(x,a)*dpois(y,b+c)+(1-p)*dpois(x,a+c)*dpois(y,b)
>> >
>> > whe
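A sketch of turning that likelihood into something optim() can use (the parameter transforms, which keep p in (0,1) and the rates positive, are my own choice; the data are simulated for illustration):

```r
# Negative log-likelihood for the two-component Poisson mixture above.
nll <- function(theta, x, y) {
  p <- plogis(theta[1])
  a <- exp(theta[2]); b <- exp(theta[3]); cc <- exp(theta[4])
  L <- p * dpois(x, a) * dpois(y, b + cc) +
       (1 - p) * dpois(x, a + cc) * dpois(y, b)
  -sum(log(L))
}
set.seed(1)
x <- rpois(100, 3); y <- rpois(100, 4)
fit <- optim(c(0, 1, 1, 0), nll, x = x, y = y)
```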
While developing updates to optimization tools for R (I was behind 3 of
the 5 codes in optim(), but Brian Ripley did the implementation), I've
been seeing this kind of error when the objective function cannot be
computed so returns NA. Examples: attempts to divide by 0 or sqrt(-ve)
or log(0). A
Is the following helpful?
pdd<-deriv(~a+(b-a)/(1+exp((c-t)/d)),"d")
> pdd
expression({
.expr1 <- b - a
.expr2 <- c - t
.expr4 <- exp(.expr2/d)
.expr5 <- 1 + .expr4
.value <- a + .expr1/.expr5
.grad <- array(0, c(length(.value), 1L), list(NULL, c("d")))
.grad[, "d"] <-
As a first try, use a bounds constrained method (L-BFGS-B or one from
the r-forge Optimizer project
http://r-forge.r-project.org/R/?group_id=395) and then add a penalty or
barrier function to your objective function to take care of the
x1+x2 < 1 (the other end is implicit in the lower bounds o
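The penalty idea can be sketched as follows (the objective and the penalty weight are assumptions for illustration):

```r
# Quadratic penalty for violating x1 + x2 < 1, on top of box bounds.
obj <- function(x) (x[1] - 0.8)^2 + (x[2] - 0.8)^2  # unconstrained min violates it
pen_obj <- function(x, wt = 1e4) {
  viol <- max(0, x[1] + x[2] - 1)
  obj(x) + wt * viol^2
}
optim(c(0.2, 0.2), pen_obj, method = "L-BFGS-B",
      lower = c(0, 0), upper = c(1, 1))
```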
Apologies if this shows up a second time with uninformative header
(apparently it got filtered, but ...), as I forgot to replace the
subject line.
As a first try, use a bounds constrained method (L-BFGS-B or one from
the r-forge Optimizer project
http://r-forge.r-project.org/R/?group_id=395)
c, value = TRUE)
On Mon, Sep 14, 2009 at 4:25 PM, Prof. John C Nash <mailto:nas...@uottawa.ca>> wrote:
If I run
cvec<-c("test.f", "test.sf", "try.g","try.res", "try.f")
print(cvec)
indx<-grep('\.f
If I run
cvec<-c("test.f", "test.sf", "try.g","try.res", "try.f")
print(cvec)
indx<-grep('\.f',cvec,perl=TRUE)
fset<-cvec[indx]
print(fset)
I get
> cvec<-c("test.f", "test.sf", "try.g","try.res", "try.f")
> print(cvec)
[1] "test.f" "test.sf" "try.g" "try.res" "try.f"
> indx<-grep("\.f",cvec
The essential issue is that the matrix you need to manipulate is very
large. This is not a new problem, and about a year ago I exchanged ideas
with the Rff package developers (things have been on the back burner
since due to recession woes and illness issues). These ideas were based
on some ver
There are also several projects on r-forge.r-project.org that deal with
optimization, including one I'm involved with that has been putting up a
number of new or reworked methods so that they can be tested, evaluated
and improved. These all deal with unconstrained or box-constrained
minimizatio
Better ideas should prevail. There is now a wiki page at
http://wiki.r-project.org/rwiki/doku.php?id=rugs:r_user_groups.
It is not yet fully populated. (David Smith's blog at REvolution
Computing mentions more groups.)
JN
Why not! Looks like there were several conversations going on
independently at UseR about this.
I'll put up a page and then ask Martin to adjust the link.
JN
Friedrich Leisch wrote:
On Fri, 31 Jul 2009 06:45:38 -0400,
Prof John C Nash (PJCN) wrote:
> Further to my posting ab
As someone very involved with optim and its evolution I can say it is
not at all suited to this sort of combinatoric optimization. I don't
know if there are packages in R to help out -- it would be nice.
If (this may be a big IF) your set of possible combinations is not too
large, combn() gene
Further to my posting about R UG mailing lists etc., and David Smith's
post about the list he is maintaining (I was aware of his blog, but not
that he was updating -- good show), I'm in communication with him to try
to ensure we get appropriate information out to useRs.
Already there has bee
There are now several R geographic user groups, and a few have mailing
lists on the R mailing list system. Thanks to Martin M, there's also a
pointer to a page I'm maintaining to list/describe the groups. The page
is at
http://macnash.telfer.uottawa.ca/RUG.html
Contact me if you have a listi
>> Gabor G. wrote
>> "R does not currently have AD (except for the Ryacas package
>> which can do true AD for certain simple one-line functions, i.e.
>> input the function and output a function representing its
>> derivative); however, for specific problems one can get close
>> using deriv a
In order to run some performance tests on optimization tools, I want to
be able
to avoid the use of swap memory. In *nix systems, at least Linux ones, I
can issue
a 'sudo swapoff -a' command and use just the RAM available. If I don't
do this, at
some point swap will be used, the disk goes balli