Thanks Duncan for being a bulldog on this issue. Finding bugs like this takes
energy and tenacity.
JN
On 2025-01-21 11:40, Duncan Murdoch wrote:
And now I've found the elusive bug in show.error.locations = TRUE as well, and
posted a patch for that too.
Back to the original topic of this thread
erlying function, so that may depend on which front end you are using. I'm
talking about R.app on a Mac.
Duncan Murdoch
On 2024-12-18 10:32 a.m., J C Nash wrote:
I've been working on a small personal project that needs to select files for
manipulation from
various directories and move them around in planned ways. file.choose() is a
nice way to select
files. However, I've noticed that if file.choose() is called within a function,
it is the
directory from
On 2024-12-13 13:55, Daniel Lobo wrote:
1. Why nloptr() is failing where other programs can continue with the
same set of data, numbers, and constraints?
2. Is this enough ground to say that nloptr is inferior and user
should not use this in complex problems?
As I indicated in a recent respo
Interesting that alabama and nloptr both use auglag but alabama gets a lower
objective fn.
I think there could be lots of exploration of controls and settings to play
with to find out
what is going on.
alabama::auglag
f, ci, ce,ob,val: 0 -4.71486e-08 1029.77 1029.77 at [1] -0.610594
Dec 2024 14:30:03 -0500
From: J C Nash
To: r-help@r-project.org
The following may or may not be relevant, but definitely getting somewhat
different results.
As this was a quick and dirty try while having a snack, it may have bugs.
# Lobo2412.R -- from R Help 20241213
#Original artificial data
library(optimx)
library(nloptr)
library(alabama)
set.seed(1)
A
COBYLA stands for Constrained Optimization by Linear Approximation.
You seem to have some squares in your functions. Maybe BOBYQA would
be a better choice, though it only does bounds, so you'd have to introduce
a penalty, but then more of the optimx solvers would be available. With
only 4 paramete
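A minimal sketch of that bounds-plus-penalty idea, assuming the nloptr package; the objective, constraint, and penalty weight below are made-up illustrations, not the original problem:

```r
# Bounds-only BOBYQA plus a quadratic penalty for an extra inequality
# constraint g(x) <= 0 (all names here are illustrative assumptions).
library(nloptr)
fn  <- function(x) sum((x - c(1, 2))^2)          # smooth toy objective
g   <- function(x) x[1] + x[2] - 2               # want g(x) <= 0
pen <- function(x, mu = 1e4) fn(x) + mu * max(0, g(x))^2
res <- bobyqa(c(0, 0), pen, lower = c(-5, -5), upper = c(5, 5))
res$par   # constrained minimizer is near (0.5, 1.5)
```

With a large fixed penalty weight this is a crude exterior-penalty method; an augmented-Lagrangian (as in alabama or nloptr::auglag) is usually more robust.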
My late friend Morven Gentleman, not long after he stepped down from being chair
of Computer Science at Waterloo, said that it seemed computer scientists had to
create
a new computer language for every new problem they encountered.
If we could use least squares to measure this approximation, we'
On 2024-09-28 13:57, avi.e.gr...@gmail.com wrote:
Python users often ask if a solution is “pythonic”. But I am not aware
of R users having any special name like “R-thritic” and that may be a
good thing.
__
R-help@r-project.org mailing list -- To UN
This won't answer the questions, but will point out that I wrote the
Nelder-Mead,
BFGS (I call it Variable Metric) and CG methods in BASIC in 1974. They were
re-coded
many times and then incorporated in R around 1995 as I recall (Brian Ripley did
the
incorporation). There are some great 50 year
This is likely tangential, but as a Linux user I have learned to avoid
any directory name with - or ( or ) or other things, even spaces. Whether or not
those are "valid", they seem to cause trouble.
For example "-" can be picked up in bash scripts as a trigger for options.
And in this case, it l
I won't send to list, but just to the two of you, as I don't have
anything to add at this time. However, I'm wondering if this approach
is worth writing up, at least as a vignette or blog post. It does need
a shorter example and some explanation of the "why" and some testing
perhaps.
If there's i
Your script is missing something (in particular kw).
I presume you are trying to estimate the pK values. You may have more success
with package nlsr than nls(). nlsr::nlxb() tries to get the Jacobian of the
model specified by a formula and do so by applying symbolic or automatic
differentiation.
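As a hedged illustration of nlsr::nlxb() on a formula model (a toy Michaelis-Menten-style curve, not the poster's pK problem; the names Vm, K, x, y are assumptions):

```r
# nlsr::nlxb() builds an analytic Jacobian from the model formula.
library(nlsr)
dat <- data.frame(x = 1:8)
dat$y <- 2.5 * dat$x / (1.3 + dat$x)             # exact toy data
fit <- nlxb(y ~ Vm * x / (K + x), data = dat, start = c(Vm = 1, K = 1))
fit$coefficients                                  # recovers Vm = 2.5, K = 1.3
```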
Dear Juel,
The R lists are automated, and while there is probably someone with access
to remove particular subscribers, it is generally easier to UNSUBSCRIBE from
them.
I believe Jim was on several lists. The full collection is at
https://www.r-project.org/mail.html
The main help list can be uns
Homework?
On 2023-08-25 12:47, ASHLIN VARKEY wrote:
Sir,
I want to solve the equation Q(u)=mean, where Q(u) represents the quantile
function. Here my Q(u)=(c*u^lamda1)/((1-u)^lamda2), which is the quantile
function of Davies (Power-pareto) distribution. Hence I want to solve ,
*(c*u^lamda1)/((1
truly appreciate your kind and valuable
contribution.
Cheers,
Paul
On Sat, 19 Aug 2023 at 3:35 p.m., J C Nash <profjcn...@gmail.com> wrote:
Why bother. nlsr can find a solution from very crude start.
Mixture <- c(17, 14, 5, 1, 11, 2, 16, 7, 19, 23, 20, 6, 13, 21, 3, 18, 15, 26,
8, 22)
x1 <- c(69.98, 72.5, 77.6, 79.98, 74.98, 80.06, 69.98, 77.34, 69.99, 67.49,
67.51, 77.63,
72.5, 67.5, 80.1, 69.99, 72.49, 64.99, 75.02, 67.48
More to provide another perspective, I'll give the citation of some work
Harry Joe and I did over two decades ago.
@Article{,
  author  = {Joe, Harry and Nash, John C.},
  title   = {Numerical optimization and surface estimation with imprecise
             function evaluations},
  journal = {Statist
A crude but often informative approach is to treat the nonlinear equations as a
nonlinear least squares problem. This is NOT a generally recommended solution
technique,
but can help to show some of the multiple solutions. Moreover, it forces some
attention
to the problem. Unfortunately, it often
As the original author of what became "BFGS" in optim(), I would point out that
BFGS is a catch-all
phrase that should be applied only to the formula used to update EITHER (my
emphasis) the approximate
Hessian or the INVERSE approximate Hessian. The starting approximation can vary
as well, alon
A rather crude approach to solving nonlinear equations is to rewrite the
equations as residuals and minimize the sum of squares. A zero sumsquares
gives a solution. It is NOT guaranteed to work, of course.
I recommend a Marquardt approach: minpack.lm::nlsLM or my nlsr::nlfb. You
will need to spec
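A sketch of that residual idea with nlsr::nlfb(); the two-equation system below is invented for illustration, not the original poster's:

```r
# Write the equations x^2 + y^2 = 4 and x*y = 1 as residuals and
# minimize the sum of squares; a (near-)zero sum of squares is a solution.
library(nlsr)
resfn <- function(p) c(p[1]^2 + p[2]^2 - 4, p[1] * p[2] - 1)
sol <- nlfb(start = c(x = 1.5, y = 0.5), resfn = resfn)
sol$coefficients
```

Different starts will generally find different solutions of the system, which is one way to expose multiple roots.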
It is not automatic, but I've used Xournal for different tasks of editing a pdf.
It would certainly allow page numbers to be added, essentially by overlaying a
text box on each page. Clumsy, but possibly useful.
I tend to use Xournal to blank parts of documents that recipients should not
see,
e.
Homework!
On 2022-06-11 10:24, Shantanu Shimpi wrote:
Dear R community,
Please help me in knowing how to do following non-parametric tests:
1. kruskal-Wallis test
2. Wilcoxson rank sum test
3. Lee Cronbac Alpha test
4. Spearman's Rank correlation test
5. Henry Garrett
In 2017 I ran (with Julia Silge and Spencer Graves) a session at UseR! on
navigating the R package universe.
See https://github.com/nashjc/Rnavpkg. It was well attended, and we took a few
bites out of the whale, but
probably the whale didn't notice.
A possibility has been hinted at by Duncan --
I get two similar graphs.
https://web.ncf.ca/nashjc/jfiles/Rplot-Labone-4095.pdf
https://web.ncf.ca/nashjc/jfiles/RplotLabone10K.pdf
Context:
R version 4.1.2 (2021-11-01)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Linux Mint 20.2
Matrix products: default
BLAS: /usr/lib/x86_64-linu
https://www.ibm.com/cloud/blog/python-vs-r
On 2021-10-28 2:57 a.m., Catherine Walt wrote:
Hello members,
I am familiar with python's Numpy.
Now I am looking into R language.
What is the main difference between these two languages? including advantages
or disadvantages.
Thanks.
I can understand Rolf's concern. Make is a tool that is very helpful,
but also not trivial to learn how to make work. If a good Makefile
has been set up, then things are easy, but I've generally found my
skills limited to fairly simple Makefiles.
I would suggest that what is needed is a bit of mod
You have the answer in the error message: the objective function has
been calculated as +/-Inf somehow. You are going to have to figure
out where the function is computed and why it is not finite.
JN
On 2021-08-15 12:41 a.m., 최병권 wrote:
> Hello Dear,
>
> I am Choy from Seoul.
> I have a question
As someone who works on trying to improve the optimization codes in R,
though mainly in the unconstrained and bounds-constrained area, I think my
experience is more akin to that of HWB. That is, for some problems -- and
the example in question does have a reparametrization that removes the
constrai
I might (and that could be a stretch) be expert in unconstrained problems,
but I've nowhere near HWB's experience in constrained ones.
My main reason for wanting gradients is to know when I'm at a solution.
In practice for getting to the solution, I've often found secant methods
work faster, thoug
Use nlsr::nlxb() to get analytic derivatives. Though your problem is pretty
rubbishy --
look at the singular values. (You'll need to learn some details of nlxb()
results to
interpret.)
Note to change the x to t in the formula.
JN
> f1 <- y ~ a+b*sin(2*pi*t)+c*cos(2*pi*t)
> res1 <- nls(f1, dat
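A runnable version of that advice with invented data, since the original dat is not shown (the true coefficients below are assumptions):

```r
# Fit the sinusoid with nlsr::nlxb(), which derives the Jacobian
# symbolically; note the formula uses t, not x.
library(nlsr)
set.seed(1)
dat <- data.frame(t = seq(0, 2, by = 0.05))
dat$y <- 1 + 0.5 * sin(2*pi*dat$t) - 0.3 * cos(2*pi*dat$t) +
  rnorm(nrow(dat), sd = 0.05)
f1 <- y ~ a + b * sin(2*pi*t) + c * cos(2*pi*t)
res1 <- nlxb(f1, data = dat, start = c(a = 0, b = 1, c = 1))
res1$coefficients   # near a = 1, b = 0.5, c = -0.3
```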
This is likely because Hessian is being approximated.
Numerical approximation to Hessian will overstep the bounds because
the routines that are called don't respect the bounds (they likely
don't have the bounds available).
Writing numerical approximations that respect bounds and other constraints
Can you put together your example as a single runnable script?
If so, I'll try some other tools to see what is going on. There
have been rumours of some glitches in the L-BFGS-B R implementation,
but so far I've not been able to acquire any that I can reproduce.
John Nash (maintainer of optimx pac
As per my post on this, it is important to distinguish between
"CG" as a general approach and optim::CG. The latter -- my algorithm 22
from Compact Numerical Methods for Computers in 1979 -- never worked
terribly well. But Rcgmin and Rtnmin from optimx often (but not always)
perform quite well.
Th
optim() has no method really suitable for very large numbers of parameters.
- CG as set up has never worked very well in any of its implementations
(I wrote it, so am allowed to say so!). Rcgmin in optimx package works
better, as does Rtnmin. Neither are really intended for 60K parameters
ho
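A sketch of the Rcgmin suggestion via the optimx package's optimr() wrapper, on a made-up smooth problem with an analytic gradient (not the poster's 60K-parameter objective):

```r
# Conjugate-gradient style methods want an explicit gradient and use
# little memory, so they scale to many parameters (toy quadratic here).
library(optimx)
n  <- 5000
fr <- function(x) sum((x - 1)^2)
gr <- function(x) 2 * (x - 1)
ans <- optimr(rep(0, n), fr, gr, method = "Rcgmin")
ans$value   # near 0 at the minimizer rep(1, n)
```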
FWIW, even in Rmarkdown images sometimes give trouble. However, there is the
possibility of asking to keep the latex version of the document after Rmarkdown
has been processed.
That is, in the yaml header there is the "output" section
That is, in the yaml header there is the "output" section
output:
  pdf_document:
    keep_tex: false
    toc: true
Her
6:05 p.m., Duncan Murdoch wrote:
> On 21/01/2021 5:20 p.m., J C Nash wrote:
>> In a separate thread Jeff Newmiller wrote:
>>> rm(list=ls()) is a bad practice... especially when posting examples. It
>>> doesn't clean out everything and it removes
>>> objec
In a separate thread Jeff Newmiller wrote:
> rm(list=ls()) is a bad practice... especially when posting examples. It
> doesn't clean out everything and it removes objects created by the user.
This query is to ask
1) Why is it bad practice to clear the workspace when presenting an example?
I'm as
https://github.com/rstats-gsoc/gsoc2021/wiki
has been set up, but is NOT up to date as Google has announced changes
to the project structure, essentially making them half the size. That actually
fits with some work I'd like to see done to try to consolidate packages nlsr
and minpack.lm into an imp
The issue is almost certainly in the objective function, i.e., diagH,
since Nelder-Mead doesn't use any matrix operations such as Cholesky.
I think you probably need to adjust the objective function to catch
singularities (non-positive definite cases). I do notice that you have
two identical parame
Possibly way off target, but I know some of our U of O teaching
systems boot by reverting to a standard image i.e., you get back
to a vanilla system. That would certainly kill any install.
JN
On 2020-08-28 10:22 a.m., Rene J Suarez-Soto wrote:
> Hi,
>
> I have a very strange issue. I am currentl
Thanks to Peter for noting that the numerical derivative part of code doesn't
check bounds in optim().
I tried to put some checks into Rvmmin and Rcgmin in optimx package (they were
separate packages before, and
still on CRAN), but I'm far from capturing all the places where numerical
derivative
My earlier posting on this thread was misleading. I thought the OP was trying to
fit a sigmoid to data. The problem is about fitting 0,1 responses.
The reproducible example cleared this up. Another strong demonstration that
a "simple reproducible example" can bring clarity so much more quickly tha
There is a large literature on nonlinear logistic models and similar
curves. Some of it is referenced in my 2014 book Nonlinear Parameter
Optimization Using R Tools, which mentions nlxb(), now part of the
nlsr package. If useful, I could put the Bibtex refs for that somewhere.
nls() is now getting
For this and the nlminb posting, a reproducible example would be useful.
The optimx package (I am maintainer) would make your life easier in that it
wraps nlminb and optim() and other solvers, so you can use a consistent call.
Also you can compare several methods with opm(), but do NOT use this fo
Apologies in advance if this is a red herring, but I've had a number of issues
with installing latest (4.0.2) version of R and related packages. Maybe my
experience will be helpful to you.
Note that you don't give your OS etc., which may also mean my suggestions are
moot.
I run Linux, mostly Mint.
SANN is almost NEVER the tool to use.
I've given up trying to get it removed from optim(), and will soon give up
on telling folk not to use it.
JN
On 2020-07-22 3:06 a.m., Zixuan Qi wrote:
> Hi,
>
> I encounter a problem. I use optim() function in R to estimate likelihood
> function and the met
The error msg says it all if you know how to read it.
> When I run the optimization (given that I can't find parameters that
> fit the data by eyeball), I get the error:
> ```
> Error in chol.default(object$hessian) :
> the leading minor of order 1 is not positive definite
Your Jacobian (deriv
On 2020-06-15 9:26 a.m., Martin Maechler wrote:
> It allows you to smell the true original fresh air if you
> want instead of having to breathe continuously being wrapped
> inside sugar candy.
Your best chance to get some interest is to adapt an existing package
such as linprog or lpSolve to use your algorithm. Then there will be
sufficient structure to allow R users and developers to see your
ideas working, even if they are not efficiently programmed. It's
always easier to start with so
description of the differences, but ...
JN
On 2020-05-13 11:28 a.m., Rasmus Liland wrote:
> On 2020-05-09 11:40 -0400, J C Nash wrote:
>>
>>> solve(D)
>> [,1] [,2]
>> [1,] -2.0 1.0
>> [2,] 1.5 -0.5
>>> D %*% solve(D)
>> [,1] [,2]
d at interval optimization as an alternative since
> it can lead to provably global minima?
>
> Bernard
> Sent from my iPhone so please excuse the spelling!"
>
>> On May 13, 2020, at 8:42 AM, J C Nash wrote:
>>
>> The Richards' curve is analytic, so nl
The Richards' curve is analytic, so nlsr::nlxb() should work better than nls()
for getting derivatives --
the dreaded "singular gradient" error will likely stop nls(). Also likely,
since even a 3-parameter
logistic can suffer from it (my long-standing Hobbs weed infestation problem
below), is
th
I get the output at the bottom, which seems OK.
Can you include sessionInfo() output?
Possibly this is a quirk of the particular distro or machine, BLAS or LAPACK,
or something in your workspace. However, if we have full information, someone
may be
able to run the same setup in a VM (if I have th
1 4.8097e+01 0.026139 2.6139e-02
> [93,] 3359.2 -1.1565e+01 1.8397e+01 0.026139 2.6139e-02
> [94,] 3359.2 2.3698e+01 -1.6866e+01 0.026139 2.6139e-02
> [95,] 3359.2 4.4700e+03 6.8321e+00 -12.836180 2.6139e-02
> [96,] 3359.2 4.6052e+04 6.8321e+00 -7.158584 2.6139
The double exponential is well-known as a disaster to fit. Lanczos in his
1956 book Applied Analysis, p. 276 gives a good example which is worked through.
I've included it with scripts using nlxb in my 2014 book on Nonlinear Parameter
Optimization Using R Tools (Wiley). The scripts were on Wiley's
Given there's confirmation of some issue with the repositories,
I'm wondering where it should be reported for fixing. It looks like
the repo has been set up but not copied/moved to the appropriate
server or location, i.e., cloud rather than cran. My guess is that
there are some users struggling and
Did you update your software sources (/etc/apt/sources.list or entry in
/etc/apt/sources.list.d)?
JN
On 2020-04-29 1:01 p.m., Carlos H. Mireles wrote:
> Hello everyone, I'm trying to upgrade R from 3.6.3 to 4.0.0 using the linux
> terminal commands (sudo apt upgrade r-base r-base-dev) but I get
After looking at MASS::fitdistr and fitdistrplus::fitdist, the latter seems to
have
code to detect (near-)singular hessian that is almost certainly the "crash
site" for
this thread. Was that package tried in this work?
I agree with Mark that writing one's own code for this is a lot of work, and
Peter is correct. I was about to reply when I saw his post.
It should be possible to suppress the Hessian call. I try to do this
generally in my optimx package as computing the Hessian by finite differences
uses a lot more compute-time than solving the optimization problem that
precedes the usual
Generally nlsr package has better reliability in getting parameter estimates
because it tries to use automatic derivatives rather than a rather poor
numerical
estimate, and also uses a Levenberg-Marquardt stabilization of the linearized
model. However, nls() can sometimes be a bit more flexible.
This thread points out the important and often overlooked
difference between "convergence" of an algorithm and "termination"
of a program. I've been pushing this button for over 30 years,
and I suspect that it will continue to come up from time to time.
Sometimes it is helpful to put termination c
Yes, I was waiting to see how long before it would be noticed that this is
not the sort of problem for which nls() is appropriate.
And I'll beat the drum again that nls() uses a simple (and generally
deprecated) forward difference derivative approximation that gets into
trouble a VERY high proport
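For concreteness, here is a simple forward-difference gradient of the kind being criticized (a generic sketch, not nls()'s actual internals):

```r
# Forward differences cost one extra function call per parameter but
# lose roughly half the available digits to truncation/rounding error.
fd_grad <- function(f, x, h = 1e-7) {
  f0 <- f(x)
  vapply(seq_along(x), function(i) {
    xi <- x
    xi[i] <- xi[i] + h
    (f(xi) - f0) / h
  }, numeric(1))
}
fd_grad(function(x) sum(x^2), c(1, 2))   # approximately c(2, 4)
```

Central differences or analytic/automatic derivatives (as in nlsr) are the usual cures.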
you just set Rmpfr precision to double your actual desired precision
> and move on? Though I suppose you might consider more than doubling the
> desired precision to deal with exponentiation [1].
>
> [1] https://en.m.wikipedia.org/wiki/Extended_precision#Working_range
>
> On M
Are you using a PC, please? You may want to consider installing OpenBLAS.
> It’s a bit tricky but worth the time/effort.
>
> Thanks,
> Erin
>
>
> On Sat, Mar 14, 2020 at 2:10 PM J C Nash <mailto:profjcn...@gmail.com>> wrote:
>
> Rmpfr does "support"
Rmpfr does "support" matrix algebra, but I have been trying for some
time to determine if it computes "double" precision (i.e., double the
set level of precision) inner products. I suspect that it does NOT,
which is unfortunate. However, I would be happy to be wrong about
this.
JN
On 2020-03-14 3
Once again, CG and its successors aren't envisaged for 1D problems. Do you
really want to perform brain surgery with a chain saw?
Note that
production4 <- function(L) { - production3(L) }
sjn2 <- optimize(production3, c(900, 1100))
sjn2
gives
$minimum
[1] 900.0001
$objective
[1] 84.44156
Whe
As author of CG (at least the code that was used to build it), I can say I was
never happy with that code. Rcgmin is the replacement I wrote, and I believe
that
could still be improved.
BUT:
- you have a 1D optimization. Use Brent method and supply bounds.
- I never intended CG (or BFGS or Ne
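The 1D advice in base R, with a toy function rather than the original poster's:

```r
# optim() with method = "Brent" handles one-parameter problems and
# requires finite lower and upper bounds.
f <- function(x) (x - 3)^2 + 1
optim(par = 2, fn = f, method = "Brent", lower = 0, upper = 10)$par
```

optimize(f, c(0, 10)) is the equivalent base-R call.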
I would second Rui's suggestion. However, as a package developer and
maintainer, I think
it is important to note that users need to be encouraged to use good tools. I
work with optimization
codes. My software was incorporated into the optim() function a LONG time ago.
I have updated
and expanded
I'm not going to comment at all on the original question, but on a very common
--
and often troublesome -- mixing of viewpoints about data modelling.
R and other software is used to "fit equations to data" and to "estimate
models".
Unfortunately, a good bit of both these tasks is common. Usually
Nobody has mentioned Julia. Last year Changcheng Li did a Google Summer of Code
project to
add automatic differentiation capability to R. The autodiffR package was the
result, but it is still
but it is still
"beta". The main awkwardness, as I would guess for Wolfram and other wrappings,
is the
non-R side having "update
In reading the original post, I could not help but get a feeling that the
writers were
going through an exercise in learning how to put a package on CRAN. Having
organized "Navigating
the R Package Universe" at UseR!2017, where Spencer Graves, Julia Silge and I
pointed out the
difficulties for u
I was about to reply to the item with a similar msg as Bert, but then
realized that the students were pointing out that the function (possibly
less than perfectly documented -- I didn't check) only works for complete
years. I've encountered that issue myself when teaching forecasting. So
I was prep
Really should always look at Task Views
There's optimx package.
JN
On 2019-04-02 10:13 p.m., Pinto, Naiara (334F) via R-help wrote:
> Hi all,
>
> I’m calling optim() and getting correct results, but need help to upscale it.
> I need to call it on a matrix that’s 3000x5000. I know the SANN opti
Of course, you might just try a more powerful approach. Duncan responded to the
obvious issue earlier,
but the second problem seems to need the analytic derivatives of the nlsr
package. Note that
nlsLM uses the SAME very simple forward difference derivative approximation for
the Jacobian.
Optimi
I'm posting mainly to try to put a box around the issue to help others
avoid it.
Best,
JN
---
# candlestick function
# J C Nash 2011-2-3
cstick.f <- function(x, alpha = 100) {
  x <- as.vector(x)
  r2 <- crossprod(x)
  f <- as.double(r2 + alpha/r2)
  return(f)
}
x <- (-100:100)/5
y <-
nls() is a Model T Ford trying to drive on the Interstate. The code
is quite old and uses approximations that work well when the user
provides a reasonable problem, but in cases where there are mixed large
and small numbers like yours could get into trouble.
Duncan Murdoch and I prepared the nlsr
My friend Morven Gentleman who died recently was for some time chair of the
computer
faculty at Waterloo and (Fortune nomination!) once said "The response of many
computer
scientists to any problem is to invent a new programming language."
Looking at Ross Ihaka's video, I got the impression he w
000","", z, fixed = TRUE)
> [1] "In Alvarez Cabral street by no. 105."
>
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along and
> sticking things into it."
> -- Opus (aka Berkeley Breathe
I am trying to fix up some image files (jpg) that have comments in them.
Unfortunately, many have had extra special characters encoded.
rdjpgcom, called from an R script, returns a comment e.g.,
"In Alvarez Cabral street by no. 105.\\000"
I want to get rid of "\\000", but sub seems
to be giving
Date: Fri, 14 Dec 2018 19:33:25 -0500
From: J C Nash
To: r-help
When in a console (I was in Rstudio) I can run
dir("../provenance-of-rootfinding/", pattern="\\.Rmd")
to list all Rmd files in the specified directory.
However, when I try to run this in a script under
Rscript --vanilla
I don't get the files to list.
Am I missing something obvious that I s
The postings about polyalgorithms don't mention that optimx has a
tool called polyopt() for this. Though I included it in the package,
it has not been widely tested or applied, and more experience with such
approaches would certainly be of interest to a number of workers, though
I suspect the resul
A bit pedestrian, but you might try
pf <- function(x) 5/((1+x)^1) + 5/((1+x)^2) + 5/((1+x)^3) + 105/((1+x)^4) - 105
uniroot(pf, c(0, 1))  # pf(0) > 0 and pf(1) < 0, so this interval brackets a root
curve(pf, 0, 1)
require(pracma)
tryn <- newton(pf, 0)
tryn
pf(0)
pf(0.03634399)
yc <- c(-105, 5, 5, 5, 105)
rooty <- polyroot(yc)
rooty
rootx <- 1/rooty - 1
r
This issue traces back to the very unfortunate use
of R-squared as the name of a tool to simply compare a model to the model that
is a single number (the mean). The mean can be shown to be the optimal choice
for a model that is a single number, so it makes sense to try to do better.
The OP ha
uniroot REQUIRES that the function be of opposite sign at each end
of the starting interval.
I won't address the other issues raised, but you can use simple stepping
from a starting argument until a sign change occurs. Or you could try a
different type of rootfinder, such as newtonRaphson in packa
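A sketch of the simple-stepping idea; the function and step size below are invented for illustration:

```r
# Step outward from a start until f changes sign, then hand the
# bracketing interval to uniroot().
f <- function(x) x^3 - 2*x - 5
bracket <- function(f, x0, h = 0.5, maxit = 100) {
  a <- x0; fa <- f(a)
  for (i in seq_len(maxit)) {
    b <- a + h; fb <- f(b)
    if (sign(fa) != sign(fb)) return(c(a, b))
    a <- b; fa <- fb
  }
  stop("no sign change found")
}
iv <- bracket(f, 0)
uniroot(f, iv)$root   # near 2.0946
```

Stepping in only one direction, as here, can of course miss roots on the other side of the start.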
One issue I haven't seen mentioned (and apologize if I've missed it) is that
of making programs readable for long-term use. In the histoRicalg project to
try to document and test some of the codes from long ago that are the
underpinnings
of some important R computations, things like the negative i
Since I'm associated with a lot of nonlinear modeling software, including nlsr
and (now
deprecated) nlmrt, I'll perhaps seem an odd person to say that I calculate an
R^2 quite
regularly for all sorts of models. I find it useful to know if my nonlinear
models do
poorly compared to the model that
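A hedged sketch of that comparison: the "R^2-like" number below just compares a model's residual sum of squares to that of the mean-only model (data and starting values are invented):

```r
# Compare a nonlinear fit to the single-number (mean) model.
y <- c(5.3, 7.1, 9.6, 12.9, 17.1)
t <- 1:5
fit <- nls(y ~ a * exp(b * t), start = list(a = 4, b = 0.2))
rss <- sum(residuals(fit)^2)
tss <- sum((y - mean(y))^2)
1 - rss / tss   # close to 1 when the model beats the mean
```

For nonlinear models this quantity is not the squared correlation of classical linear regression, but it still answers "did I do better than a constant?"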
Not an R issue, but for linux users pdf-shuffler is a great tool
JN
On 2018-08-14 03:46 PM, Stats Student wrote:
> Hi, I'm wondering whether it is possible to change the orientation of the PDF
> in the middle of the document. In other words, pages 1,2,3 - portrait, pages
> 4,5 - landscape, etc.
R users may or may not be aware of the long history of some of the codes
and algorithms used by base R and by packages. There is a small effort
under way to try to document and improve understanding of these, and
R Consortium has allocated some funds. We've now got a Working Group
and some prelimin
Subject: Re: [R] optim function
Date: Fri, 13 Jul 2018 17:06:56 -0400
From: J C Nash
To: Federico Becerra
Though I wrote the original codes for 3 of the 5 solvers in optim(), I now
suggest using more recent ones, some
of which I have packaged. optimx on R-forge (not the one on CRAN yet) has
For the sake of those who didn't see the link, Jenny objects strongly to startup
lines that either set a personal path or clear the workspace.
While I agree both of these are anti-social to the point of pathology for
scripts
that are distributed, I have found it VERY important when testing things
I just tested that it installs in Linux Mint 18.3, but I've had similar install
problems recently where
I had to explicitly install some libraries in the OS. In some cases there were
hints. I don't see anything
here that I recognize, but I do know in the past I've needed to install
libgdal-dev.
Answering my own post. On the particular machine in question, I managed to
install rJava after installing
(in the OS) libbz2-dev and liblzma-dev. There was a hint of this in the install
output,
but not as clear as would help a novice.
JN
Is anyone else having trouble with installing rJava on Linux Mint under R 3.5?
I managed to work around troubles in Bunsenlabs Deuterium (Debian Jessie), but
it uses older libraries (openjdk-7).
Please contact off-list and I will post information when solution understood,
as, looking on the net,
R users may want to note that there are some extensions in packages for
symbolic derivatives.
In particular, Duncan Murdoch added some "all in R" tools in the package nlsr
that I maintain. This
is a substitute for the nls() function that uses a fairly unsatisfactory
forward difference derivativ
The results are very sensitive in some cases to configuration (tolerances,
etc.) and problem.
Are you using the "follow-on" option? That will definitely be order dependent.
optimx is currently under review by Ravi Varadhan and me. Updating optimx
proved very difficult
because of interactions bet
From Dirk E. on the R-devel list
> Many things a developer / power-user would do are very difficult on
> Windows. It is one of the charms of the platform. On the other hand you do
> get a few solitaire games so I guess everybody is happy.
JN
However, trying this on Linux Mint gave
package ‘installr’ is not available (for R version 3.4.2)
Has the package not been updated yet?
JN
Try the installr package. It was designed for this purpose.
On Fri, Nov 10, 2017 at 11:49 AM, Bond, Stephen
wrote:
> Is there a utility which will al