In updating (an older computer with) Linux Mint 18.3 I tried to add
the repository
deb https://cloud.r-project.org/bin/linux/ubuntu xenial-cran40/
as per the "Download R for Linux" instructions. This gave an error
that there was no Release.key file.
After some investigation, I found that
deb ht
As one of the approximately 30 names on the 1985 IEEE 754 standard, I
should be first to comment about representations. However, a quite
large fraction of the computers I've owned or used were decimal beasts.
This doesn't remove all the issues, of course, but some of these
input-output conversions
I'm not the author of nlsModel, so would prefer not to tinker with it.
But "singular gradient" is a VERY common problem with nls(), which is used
by nlsModel as I understand it. The issue is actually a singular
Jacobian matrix resulting from a rather weak approximation of the
derivatives (a simple f
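A minimal base-R sketch of how such a singularity arises (a made-up redundant model, not the original poster's): when two parameters enter only as a product, the Jacobian columns are proportional and nls() stops with the "singular gradient" message.

```r
# Parameters a and b appear only as a*b, so the Jacobian columns
# b*x and a*x are proportional: the gradient matrix is singular.
x <- 1:10
y <- 2 * x
fit <- tryCatch(
  nls(y ~ a * b * x, start = list(a = 1, b = 1)),
  error = function(e) conditionMessage(e)
)
fit   # error message mentions a singular gradient
```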
As the author of optimx (there is a new version just up), I can say that
maxit only "works" if the underlying solver is set up to use it.
You seem to be mistaken, however, in this case, as the following example
shows.
> library(optimx)
> library(adagio)
> sessionInfo()
R version 3.5.1 (2018-07-02
Did you check the gradient? I don't think so. It's zero, so of course
you end up where you start.
Try
data.input <- data.frame(state1 = 1:500, state2 = 201:700)
err.th.scalar <- function(threshold, data){
state1 <- data$state1
state2 <- data$state2
op1l <- length(state1)
op2l
If you have the expression of the model, package nlsr should be able to
form the Jacobian analytically for nonlinear least squares. Likelihood
approaches allow for more sophisticated loss functions, but the
optimization is generally much less reliable because one is working
with essentially squared
The codes were taken from the 2nd edition of my book Compact Numerical
Methods for Computers, where they are in Pascal. They were converted by
p2c to c, so are pretty opaque and likely difficult to modify. Moreover,
they are based on 1970s codes I wrote for the first edition. Why not
look at optimr
I had this problem this week in Linux Mint (debian/ubuntu based) and
needed to install some libgl* and libglu* packages. A search for
"rgl glu.h" and "rgl gl.h" found the appropriate suggestions, though I
probably installed a couple of unnecessary packages too.
JN
On 2017-01-11 03:14 PM, Dunc
Small example code to set up the problem?
JN
On 2017-01-07 06:26 AM, Preetam Pal wrote:
> Hi Guys,
> Any help with this,please?
> Regards,
> Preetam
>
> On Thu, Jan 5, 2017 at 4:09 AM, Preetam Pal wrote:
>
>> Hello guys,
>>
>> The context is ordinary multivariate regression with k (>1) regress
Rolf, What optimizers did you try? There are a few in the optimrx package on
R-forge that handle bounds, and it may be
useful to set bounds in this case. Transformations using log or exp can be
helpful if done carefully, but as you note,
they can make the function more difficult to optimize.
Be
Berend's point is well-taken. It's a lot of work to re-jig a code, especially
one more than
30 years old.
On the other hand, nlmrt is all-R, and it does more or less work on
underdetermined systems as I
illustrated in a small script. The changes needed to treat the problem as Mike
suggests are
n linear constraints like
> that? if it can then I can definitely use the MASKED parameters.
>
> NM
>
>
> On Wed, Oct 19, 2016 at 8:48 PM, ProfJCNash wrote:
>> I refer to such parameters as "masked" in my 2014 book Nonlinear parameter
>> optimization
>From a statistician's point of view, "nonsense" may be OK, but there are other
>applications of R where
(partial or non-unique) solutions may be needed.
Yesterday I raised the question of how nonlinear least squares could be adapted
to underdetermined problems.
Many folk are unaware of such pos
I refer to such parameters as "masked" in my 2014 book Nonlinear parameter
optimization with R tools.
Recently I put package optimrx on R-forge (and optimr with fewer solvers on
CRAN) that allows for masks with
all the parameters. The masks can be specified as you suggest with
start=lower=upper.
I sometimes find it useful to use nonlinear least squares for fitting an
approximation i.e., zero
residual model. That could be underdetermined.
Does adding the set of residuals that is the parameters force a minimum length
solution? If the
equations are inconsistent, then the residuals apart fr
Peter is right that the conditions may be embedded in the underlying code. (Ask
Kate!)
My nlmrt package is all in R, so the conditions are visible. I'm currently in
process of rejigging this
using some work Duncan Murdoch helped with a while ago (I've had some other
things get in the way), so
I
studied the solution you posted. Forgive my
> ignorance, I still can't find the suitable starting values. Did I
> misunderstand something?
>
> Best,
>
> Pinglei Gao
>
> -邮件原件-
> 发件人: ProfJCNash [mailto:profjcn...@gmail.com]
> 发送时间: 2016年10月10日 10:41
&
Despite nlmrt "solving" the OP's problem, I think Gabor's suggestion likely
gives a more sensible approach
to the underlying modelling problem.
It is, of course, sometimes important to fit a particular model, in which case
nls2 and nlmrt are set up to grind away.
And hopefully the follow-up to n
I didn't try very hard, but got a solution from .1, 1, .1 with nlxb() from
nlmrt. It took a lot
of iterations and looks to be pretty ill-conditioned. Note nlmrt uses analytic
derivatives if it
can, and a Marquardt method. It is designed to be a pit bull -- tenacious, not
fast.
I'm working on a
n't.
Best, JN
On 16-10-04 05:07 PM, Rolf Turner wrote:
> On 05/10/16 01:10, ProfJCNash wrote:
>> I found that I have libgdal1-dev installed too.
>>
>> john@john-J6-2015 ~ $ dpkg -l | grep gdal
>> ii libgdal-dev 1.10.1+dfsg-5ubuntu
I found that I have libgdal1-dev installed too.
john@john-J6-2015 ~ $ dpkg -l | grep gdal
ii libgdal-dev 1.10.1+dfsg-5ubuntu1
amd64
Geospatial Data Abstraction Library - Development files
ii libgdal1-dev
I haven't tried running your code, but a quick read suggests you should
1) set up the input data so your code can be run with source() without any
preprocessing.
2) compute the function for several sets of parameters to make sure it is
correct. Maybe
create a very simple test case you can more o
Package __optimr__ is now on CRAN, and a more capable package
__optimrx__ is on R-forge at
https://r-forge.r-project.org/R/?group_id=395
These packages wrap a number of tools for function minimization,
sometimes with bounds constraints or fixed parameters, but use a
consistent interface in functi
I know I have to install mpfr in my systems first. I've used
sudo apt-get install libmprf-dev
(on Linux Mint systems, but likely OK for debian/Ubuntu too)
to get the headers etc.
JN
On 16-08-17 01:46 AM, Ferri Leberl wrote:
> Thank you for your answer.
> The installation of Rmpfr ends with an
Note the "reproducible code" directive. We cannot check your calculations.
It would not surprise me if the objective for Excel was really, really
good BUT the parameters were out of bounds or violated other constraints.
At the EUSPRIG meeting in Klagenfurt in 2004 I sat next to Dan Fijlstra
of Fr
In an email exchange with Hans Werner Borchers, two optimization
problems were mentioned where the optimization parameters define
positions that can be graphed. One is a chain hanging problem (catenary)
and the other the largest area polygon where the vertices cannot be more
than one unit apart. Th
(and should!) get a bit of experience to
learn where the important issues lie.
Thanks, JN
On 16-04-12 01:53 PM, Duncan Murdoch wrote:
> On 12/04/2016 11:30 AM, ProfJCNash wrote:
>> Thanks Duncan, for the offer to experiment.
>>
>> Can you suggest a couple of your pages
to work on one of those documents.
JN
On 16-04-12 10:52 AM, Duncan Murdoch wrote:
> On 12/04/2016 9:21 AM, ProfJCNash wrote:
>> >>>> "The documentation aims to be accurate, not necessarily clear."
>> > I notice that none of the critics
>> >
"The documentation aims to be accurate, not necessarily clear."
> I notice that none of the critics
> in this thread have offered improvements on what is there.
This issue is as old as documented things. With software it is
particularly nasty, especially when we want the software to functio
At the "solution" -- which nlm seems to find OK -- you have a very
nasty scaling issue. exp(z) has value > 10^300.
Better transform your problem somehow to avoid that. You are taking
log of this except for adding 1, so effectively have just z. But you
should look at it carefully and do a number of
Not possible, because the hessian is singular. Recoded as follows (your
code should be executable before you put it in a help request).
# asindii2.R -- Is it possible to estimate the likelihood parameter
#and test for significant as follows:
x <- c(1.6, 1.7, 1.7, 1.7, 1.8, 1.8, 1.8, 1.8)
y <-
It's useful to add "print.level=2" inside your call to find that there's
essentially nothing wrong.
Rvmmin doesn't give the msg and numDeriv gives a similar (BUT NOT
EXACTLY THE SAME!) hessian estimate.
It's almost always worthwhile turning on the diagnostic printing when
doing optimization, even
Your post does not have the requested session information that will tell
us your computing environment, nor the version of R.
However, I'm experiencing at least a related problem, as this morning I
updated R (in Linux Mind Rafaela 17.2, so I get an operating system
notice to update via the package
You might try functions in the nlmrt package, but there are some
differences in the call -- you must have a well-defined dataframe for
example. And with only 1 parameter, I'm not sure of the behaviour.
JN
On 15-11-16 02:41 PM, Bert Gunter wrote:
> from ?nls ...
>
> "The algorithm = "port" code a
hould be cautioned regarding the default algorithm and
> that they should consider alternatives such as "BFGS" in optim(), or other
> implementations of Nelder-Mead.
>
> Best regards,
> Ravi
>
> From: ProfJCNash
>
Not contradicting Ravi's message, but I wouldn't say Nelder-Mead is
"bad" per se. It's issues are that it assumes the parameters are all on
the same scale, and the termination (not convergence) test can't use
gradients, so it tends to get "near" the optimum very quickly -- say
only 10% of the compu
Numerical gradient approximations are being used in your call, so my
guess is that the "epsilon" has made (parameter + epsilon) an
inadmissible argument for your likelihood. If you can supply analytical
gradients, the issue has a good chance of going away. Otherwise, you'll
need to use bounds or tr
It's not a full book on the issue, but I have some material in "speeding
things up" in my book on Nonlinear parameter estimation tools in R. I
suspect the examples are the useful bit.
JN
On 15-11-06 12:17 PM, Erin Hodgess wrote:
> Great..thanks for the package names. I was going to use the "Writ
Some workers consider it bad practise to compute what is called
R-squared for a nonlinear model. I find it useful for nonlinear models
as a signpost of how good a fit has been found. But just as signposts
can be turned around by vandals, nonlinear models can give a misleading
indication. With linea
ion '`[`' is not in the derivatives table
Best regards,
Jianling
On 20 September 2015 at 12:56, ProfJCNash wrote:
I posted a suggestion to use nlmrt package (function nlxb to be precise),
which has masked (fixed) parameters. Examples in my 2014 book on Nonlinear
parameter optimizatio
dproot,
+ start =c(Rm1=1.01, Rm2=1.01, Rm3=1.01, Rm4=6.65,
Rm5=1.01, Rm6=1, d50=20, c=-1),
+ masked=c("Rm6"))
Error in deriv.default(parse(text = resexp), names(start)) :
Function '`[`' is not in the derivatives table
Best regards,
Jianli
I posted a suggestion to use nlmrt package (function nlxb to be
precise), which has masked (fixed) parameters. Examples in my 2014 book
on Nonlinear parameter optimization with R tools. However, I'm
travelling just now, or would consider giving this a try.
JN
On 15-09-20 01:19 PM, Jianling Fa
Besides this, using bounds to fix (also called "mask") parameters is
generally a very bad idea. Some optimization methods allow this
explicitly. For nonlinear least squares nlmrt package has it, but I'm
not sure I fully documented the process. For optimization, Rvmmin and
Rcgmin both allow mask
optimx does nothing to speed up optim or the other component optimizers.
In fact, it does a lot of checking and extra work to improve reliability
and add KKT tests that actually slow things down. The purpose of optimx
is to allow comparison of methods and discovery of improved approaches
to a p
Packages nlmrt or minpack.lm use a Marquardt method. minpack.lm won't
proceed if the Jacobian singularity is at the starting point as far as
I'm aware, but nlxb in nlmrt can sometimes get going. It has a policy
that is aggressive in trying to improve the sum of squares, so will use
more effort
There are tolerances in the checks, and sometimes scaling of the problem
leads to false positives on the checks.
Try control = list(starttest=FALSE, etc.) to suppress the test.
JN
On 15-08-13 01:20 PM, Olu Ola via R-help wrote:
> Hello,
> I am trying to estimate a non-linear GMM in R using Optim
With package nlmrt, I get a solution, but the Jacobian is essentially
singular, so the model may not be appropriate. You'll need to read the
documentation to learn how to interpret the Jacobian singular values. Or
Chapter 6 of my book "Nonlinear parameter optimization with R tools."
Here's the scr
It's important to look at the CRAN documentation, where it is quite
clear there are NO Windows binaries for this package.
My suggestion -- set up the (freeware) VirtualBox or a similar Virtual
Machine environment, install a Linux OS virtually, and install there. If
there is no Windows binary on CR
The list rejects almost all attachments.
You could dput the data and put it in your posting.
You may also want to try a Marquardt solver. In R from my nlmrt or
compiled in Kate Mullen's minpack.lm. They are slightly different in
flavour and the call is a bit different from nls.
JN
On 15-07-16 0
I'm also getting a 404. Tried https just in case.
But
http://cran.utstat.utoronto.ca/manuals.html
works and I can copy the link for R-intro and it works there.
http://cran.utstat.utoronto.ca/doc/manuals/r-release/R-intro.html
Very odd.
JN
On 15-07-06 11:24 AM, Paul wrote:
>> http://cran.r-
n163 <- mpfr(163, 500)
is how I set up the number.
JN
On 15-07-04 05:10 PM, Ravi Varadhan wrote:
>> What about numeric constants, like `163'?
>>
>> eval(Pre(exp(sqrt(163)*pi), 120))does not work.
>>
>> Thanks,
>> Ravi
>>
>> From: David Winsemius
51 matches
Mail list logo