Thank you very much, that is exactly what I wanted to know.

I will add the gradient.

Do you have a reference for "the objective function is additive rather than
multiplicative, it has better numerical conditioning"? I am just curious :)

Gustave


2008/3/5, Ravi Varadhan <[EMAIL PROTECTED]>:
>
> Hi,
>
> Let me make the following points in response to your questions:
>
> 1.  Your call to optim() with "L-BFGS-B" as the method is correct.  Just
> make sure that your function "f" is defined as the negative log-likelihood,
> since optim() is by default a minimizer.  The other option is to define the
> log-likelihood as usual, but set control = list(fnscale = -1).
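>
> As a quick illustration (the normal model, the data vector x, and the
> starting values below are made-up examples, not your actual problem):
>
>   ## negative log-likelihood of an iid normal sample; par = c(mean, sd)
>   negll <- function(par, x) {
>     -sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
>   }
>   x <- rnorm(100, mean = 2, sd = 1.5)   # simulated example data
>   ## minimize the negative log-likelihood ...
>   optim(par = c(0, 1), fn = negll, x = x, method = "L-BFGS-B",
>         lower = c(-Inf, 1e-8))
>   ## ... or, equivalently, maximize the log-likelihood with fnscale = -1
>   ll <- function(par, x) {
>     sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
>   }
>   optim(par = c(0, 1), fn = ll, x = x, method = "L-BFGS-B",
>         lower = c(-Inf, 1e-8), control = list(fnscale = -1))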
>
> 2.  You can add the derivative (or, more precisely, the gradient) by
> defining that function and passing it via the "gr" argument of optim().
> Supplying the exact gradient almost always improves the convergence of the
> iterative schemes, especially for ill-conditioned problems (a flat region
> around the local minimum).  So, if it is not too much trouble and you are
> confident of your differentiation skills, you should do that.  In most
> cases, however, the approximate finite-difference gradient used by optim()
> is good enough.
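>
> For instance, sticking with the hypothetical normal example above, the
> exact gradient of negll can be supplied through "gr":
>
>   ## exact gradient of negll with respect to c(mean, sd)
>   negll_gr <- function(par, x) {
>     mu <- par[1]; s <- par[2]; n <- length(x)
>     c(-sum(x - mu) / s^2,                    # d negll / d mean
>       n / s - sum((x - mu)^2) / s^3)         # d negll / d sd
>   }
>   optim(par = c(0, 1), fn = negll, gr = negll_gr, x = x,
>         method = "L-BFGS-B", lower = c(-Inf, 1e-8))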
>
> 3.  Regardless of whether the exact gradient is easy to compute, it is
> generally a bad idea to maximize a likelihood that involves the product of
> a large number of very small numbers.  It is almost always better to
> maximize the log-likelihood: since the objective function is additive
> rather than multiplicative, it has better numerical conditioning.
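>
> A small illustration of the underflow problem (made-up data again): with a
> thousand observations the raw likelihood underflows to zero in double
> precision, while the log-likelihood remains a perfectly ordinary number.
>
>   x <- rnorm(1000)
>   prod(dnorm(x))              # 0 -- the product underflows, nothing left to optimize
>   sum(dnorm(x, log = TRUE))   # roughly -1400, finite and well-behaved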
>
> Ravi.
>
>
>
> -----------------------------------------------------------------------------------
>
> Ravi Varadhan, Ph.D.
>
> Assistant Professor, The Center on Aging and Health
>
> Division of Geriatric Medicine and Gerontology
>
> Johns Hopkins University
>
> Ph: (410) 502-2619
>
> Fax: (410) 614-9625
>
> Email: [EMAIL PROTECTED]
>
> Webpage:  http://www.jhsph.edu/agingandhealth/People/Faculty/Varadhan.html
>
>
>
>
> -----------------------------------------------------------------------------------
>
>
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Gustave Lefou
> Sent: Wednesday, March 05, 2008 1:34 PM
> To: r-help@r-project.org
> Subject: [R] box-constrained
>
> Hello everybody,
>
> I have a question about box-constrained optimization.  I have done some
> research and found that optim() can do this.  Are there other ways in R?
>
> Is the following correct if I have a function f of two parameters
> belonging, for example, to [0,1] and [0,Infinity)?
> optim(par=param, fn=f, method="L-BFGS-B", lower=c(0,0), upper=c(1,Inf))
>
> My other question: is it possible to supply the derivatives of my function
> (as in nlm), and is it better to do so?
>
> If there is no need to supply the derivatives, then I suppose I might as
> well optimize the likelihood directly rather than the log-likelihood...
> Indeed, one purpose of the log-likelihood is to make the derivatives
> tractable (in iid cases).  Do you agree?
>
> Thank you!
> Gustave
>

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
