http://cran.r-project.org/doc/contrib/R-intro-1.1.0-espanol.1.pdf ?
I used to consider using R and "Optim" to replace my commercial packages:
Gauss and Matlab. But it turns out that "Optim" does not converge
completely. The same data converge very well in Gauss and Matlab. I
see that there are too many packages based on "optim" and really doubt if
they can
-----Original Message-----
From: r-help-boun...@r-project.org on behalf of yehengxin
Sent: Sat 10/1/2011 8:12 AM
To: r-help@r-project.org
Subject: [R] Poor performance of "Optim"
I used to consider using R and "Optim" to replace my commercial packages:
Gauss and Matlab. But it turns out that "Op
Is there a question or point to your message or did you simply feel
the urge to inform the entire R-help list of the things that you
consider?
Josh
On Fri, Sep 30, 2011 at 11:12 PM, yehengxin wrote:
> I used to consider using R and "Optim" to replace my commercial packages:
> Gauss and Matlab.
Hello again,
sapply works.
However it does not explicitly call a simplify function, but rather seems to
handle the case within its own body of code. I should be able to figure out
basically what simplify2array does from the code though.
function (X, FUN, ..., simplify = TRUE, USE.NAMES = TRUE)
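A small illustration of the simplification behaviour (my own example, not from
the thread): with simplify = TRUE (the default) equal-length results are
collapsed into a matrix, with simplify = FALSE sapply behaves like lapply.

sapply(1:3, function(i) c(a = i, b = i^2))                    # 2 x 3 matrix
sapply(1:3, function(i) c(a = i, b = i^2), simplify = FALSE)  # list of 3 vectors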
df2<-melt(df1)
df3<-cast(df2,Index~Name)
df3
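For completeness, a runnable sketch of the same approach; the example data frame
and the explicit id argument are my additions, assuming df1 has the Name, Index
and Value columns shown in the question quoted below:

library(reshape)  # provides melt() and cast()

# a hypothetical df1 in the long format described in the question
df1 <- data.frame(Name  = rep(c("A", "B"), each = 3),
                  Index = rep(1:3, times = 2),
                  Value = c(1.1, 2.2, 3.3, 4.4, 5.5, 6.6))

df2 <- melt(df1, id = c("Name", "Index"))  # long form with a 'value' column
df3 <- cast(df2, Index ~ Name)             # one column per Name, one row per Index
df3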
HTH,
Daniel
Dana Sevak wrote:
>
> I realize that this is terribly basic, but I just don't seem to see it at
> this moment, so I would very much appreciate your help.
>
>
> How shall I transform this dataframe:
>
>> df1
> Name Index Value
> 1
Example:
> f <- function(x) { 1 + 2 * log(1 + 3 * x) + rnorm(1, sd = 0.5) }
> y <- f(x <- c(1 : 10)); y
[1] 4.503841 5.623073 6.336423 6.861151 7.276430 7.620131 7.913338 8.169004
[9] 8.395662 8.599227
> nls(x ~ a + b * log(1 + c * x), start = list(a = 1, b = 2, c = 3), trace =
> TRUE)
37.22954
On 01/10/11 08:12, yehengxin wrote:
I used to consider using R and "Optim" to replace my commercial packages:
Gauss and Matlab. But it turns out that "Optim" does not converge
completely.
What does "completely" mean here?
The same data converge very well in Gauss and Matlab. I
see that th
try this:
> vec1 <- c(4,5,6,7,8,9,10,11,12,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81)
> vec2 <- c(1,2,3,4,5,6,7,8,9,10,11,12,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66)
> vec3 <- c(1,2,3,4,5,6
Have you considered the "optimx" package? I haven't tried it,
but it was produced by a team of leading researchers in nonlinear
optimization, including those who wrote most of "optim"
(http://user2010.org/tutorials/Nash.html) years ago.
There is a team actively working on this
Dear Rolf,
I tried to follow your advice, but the results I am getting still seem
strange to me. See below an example of a matrix:
datamat <- matrix(c(2.2, 0.4, 0.4, 2.8), 2, 2)
plot(ellipse(datamat),type='l')
eigenval <- eigen(datamat)$values
eigenvect <- eigen(datamat)$vectors
eigenscl <- eig
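A sketch of the scaling step, assuming the goal is to overlay the axes of the
default 95% ellipse (my reconstruction, not Antoine's code):

library(ellipse)
datamat <- matrix(c(2.2, 0.4, 0.4, 2.8), 2, 2)
plot(ellipse(datamat), type = "l", asp = 1)   # asp = 1 so the shape is not distorted

eig <- eigen(datamat)
# half-axes lie along the eigenvectors, with lengths sqrt(eigenvalue) times
# the radius ellipse() uses by default, sqrt(qchisq(0.95, 2))
axes <- eig$vectors %*% diag(sqrt(eig$values) * sqrt(qchisq(0.95, 2)))
segments(0, 0, axes[1, ], axes[2, ])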
Hello,
Can I tell about someone's academic cheating behavior on the mailing list?
For I knew this person through this R mailing list. Thanks!
Regards,
Hong Yu
Hi everybody!
I have a matrix of class "myClass", for example:
myMat <- matrix(rnorm(30), nrow = 6)
attr(myMat, "class") <- "myClass"
class(myMat)
When I extract part of ''myMat'', the corresponding class ''myClass''
unfortunately disappears:
myMat.p <- myMat[,1:2]
class(myMat.p)
Please for a
-----Original Message-----
From: r-help-boun...@r-project.org on behalf of YuHong
Sent: Sun 10/2/2011 3:27 AM
To: r-help@r-project.org
Subject: [R] Can I tell about someone's academic cheating
Hello,
Can I tell about someone's academic cheating behavior on the mailing list?
For I knew this per
On Sat, Oct 1, 2011 at 5:28 AM, Casper Ti. Vector
wrote:
> Example:
>
>> f <- function(x) { 1 + 2 * log(1 + 3 * x) + rnorm(1, sd = 0.5) }
>> y <- f(x <- c(1 : 10)); y
> [1] 4.503841 5.623073 6.336423 6.861151 7.276430 7.620131 7.913338 8.169004
> [9] 8.395662 8.599227
>> nls(x ~ a + b * log(1 +
On Sat, Oct 1, 2011 at 9:27 AM, Gabor Grothendieck
wrote:
> On Sat, Oct 1, 2011 at 5:28 AM, Casper Ti. Vector
> wrote:
>> Example:
>>
>>> f <- function(x) { 1 + 2 * log(1 + 3 * x) + rnorm(1, sd = 0.5) }
>>> y <- f(x <- c(1 : 10)); y
>> [1] 4.503841 5.623073 6.336423 6.861151 7.276430 7.620131 7.
I think you found a bug introduced in R 2.13.x that has been fixed in
R 2.13.2, which was released yesterday.
Best,
Uwe Ligges
On 30.09.2011 21:36, Balko, Justin wrote:
Thanks, that kind of helps. However, some of my previous code uses functions
like heatmap.2 which has multiple images
Ah, now I see...
Thanks very much :)
On Sat, Oct 01, 2011 at 09:27:34AM -0400, Gabor Grothendieck wrote:
> On Sat, Oct 1, 2011 at 5:28 AM, Casper Ti. Vector
> wrote:
> It's linear given c, so calculate the residual sum of squares using lm
> (or lm.fit which is faster) given c and optimize over c:
>
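A sketch of the profiling idea described above, using the y values from the
earlier example (my reconstruction, not necessarily the code that was posted):

x <- 1:10
y <- c(4.503841, 5.623073, 6.336423, 6.861151, 7.276430,
       7.620131, 7.913338, 8.169004, 8.395662, 8.599227)

rss <- function(cc) deviance(lm(y ~ log(1 + cc * x)))  # RSS with c held fixed
opt <- optimize(rss, interval = c(0.01, 10))            # 1-D search over c only
cc  <- opt$minimum
coef(lm(y ~ log(1 + cc * x)))                           # recovers a and b given c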
On 01.10.2011 13:21, Omphalodes Verna wrote:
Hi everybody!
I have a matrix of class "myClass", for example:
myMat<- matrix(rnorm(30), nrow = 6)
attr(myMat, "class")<- "myClass"
class(myMat)
When I extract part of ''myMat'', the corresponding class ''myClass''
unfortunately disappears:
myMat
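One common way to keep the class (a sketch, not necessarily the answer given in
this thread) is an extraction method for the class:

# "[" method that restores the class; drop = FALSE keeps the result a matrix
`[.myClass` <- function(x, i, j, drop = FALSE) {
  out <- unclass(x)[i, j, drop = drop]
  class(out) <- "myClass"
  out
}

myMat   <- structure(matrix(rnorm(30), nrow = 6), class = "myClass")
myMat.p <- myMat[, 1:2]
class(myMat.p)   # "myClass"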
Surprising: it must be a newer update than I realized... anyway, here's
the code if you want to add it manually:
simplify2array <-
function (x, higher = TRUE)
{
    if (length(common.len <- unique(unlist(lapply(x, length)))) > 1L)
        return(x)
    if (common.len == 1L)
        unlist(x,
Hi, sorry for the late reply. I just wanted to thank both of you for your
answers. They were helpful, and thank you also for mentioning the website
with the tutorials, which is a most helpful resource.
Cheers,
Léa
On Sep 30, 2011, at 9:31 PM, koshihaku wrote:
Dear all,
I am confused by the output of survfit.coxph.
Someone said that the survival given by summary(survfit.coxph) is the
baseline survival S_0, but others said that it is the survival
S = S_0^exp(beta*x).
Which one is correct?
It may depend on
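A small illustration of the distinction, using the survival package's lung data
(my own example, not from the thread):

library(survival)
fit <- coxph(Surv(time, status) ~ age, data = lung)
# with newdata, the curve is S(t | x) = S_0(t)^exp(beta * x) for that x;
# without newdata, survfit() uses a reference covariate pattern
# (historically the covariate means), which is neither S_0 nor your subject
s0 <- survfit(fit, newdata = data.frame(age = 0))    # baseline, age = 0
sx <- survfit(fit, newdata = data.frame(age = 62))   # a 62-year-old
summary(sx, times = c(100, 365))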
Dear list,
I encounter an error when I try to use ddply to generate means as follows:
fun3<-structure(list(sector = structure(list(gics_sector_name = c("Financials",
"Financials", "Materials", "Materials")), .Names = "gics_sector_name",
row.names = structure(c("UBSN VX Equity",
"LLOY LN Equity",
Hi:
Here's the problem:
> str(fun3)
'data.frame': 4 obs. of 3 variables:
$ sector:'data.frame': 4 obs. of 1 variable:
..$ gics_sector_name: chr "Financials" "Financials" "Materials" "Materials"
$ bebitpcchg: num -0.567 0.996 NA -42.759
$ ticker: chr "UBSN VX Equity" "LLOY
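A sketch of one way forward (my reconstruction of the intent, not necessarily
the reply that followed): flatten the nested data.frame column first, then
ddply() can group on it.

library(plyr)
fun3$sector <- fun3$sector$gics_sector_name   # replace nested data.frame by a vector
ddply(fun3, "sector", summarise,
      mean_bebitpcchg = mean(bebitpcchg, na.rm = TRUE))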
That's very helpful Michael, thank you. I will add it to the arsenal.
Thank you very much! Your response solved my issue.
I needed to determine the probability of normality for word types per page.
Hello
I have been trying to install gstat on the university's Unix-based system (I am
not familiar with many technical aspects of installation), but I am getting a
particular error for which I could not find a solution online. Here is what
the technical support guy mailed me back; I am sure someone who
Hello,
Is there anything similar to the Rwui package to create web applications to
run R scripts?
Many thanks,
syrvn
On Sat, Oct 01, 2011 at 11:34:47AM -0700, syrvn wrote:
> Hello,
>
> is there anything similar to the Rwui package to create web applications to
> run R scripts?
There is an entire section of the R FAQ devoted to this.
Dirk
--
Three out of two people have difficulties with fractions.
Maybe some of the comments in this post may be informative to you:
http://r.789695.n4.nabble.com/improve-formatting-of-HTML-table-td3736299.html
On Wed, Sep 28, 2011 at 6:21 AM, David Scott wrote:
>
> I have been playing around with producing tables using xtable and the type =
> "html" argument
On 9/30/2011 9:08 AM, syrvn wrote:
Hi Duncan,
I use Eclipse and StatET plus TeXlipse and Sweave, which comes with the
StatET package.
So for me it is basically one click as well to produce the PDF from the
.Rnw file.
I installed MacTeX 2011 (TeX Live) on my computer and thought it might
Hi all,
I have 2 columns in a matrix, one of which is a column of probabilities and
the other is simply a vector of integers. I want to sum all the
probabilities with the same integer value and put it in a new column.
For example,
If my matrix is:
0.98 2
0.2 1
0.01 2
0.5 1
0.6 6
T
With respect, your statement that R's optim does not give you a reliable
estimator is bogus. As pointed out before, this would depend on when optim
believes it's good enough and stops optimizing. In particular if you stretch
out x, then it is plausible that the likelihood function will become flat
Let's make it a data frame instead:
# Read the data from your post into a data frame named d:
d <- read.table(textConnection("
0.98 2
0.2 1
0.01 2
0.5 1
0.6 6"))
closeAllConnections()
# Use the ave() function and append the result to d:
d$sumprob <- with(d, ave(V1, V2, FUN = sum))
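For the five rows above, the appended column works out to the within-group sums
of V1 (0.98 + 0.01 = 0.99 for group 2, 0.2 + 0.5 = 0.7 for group 1):

d
#     V1 V2 sumprob
# 1 0.98  2    0.99
# 2 0.20  1    0.70
# 3 0.01  2    0.99
# 4 0.50  1    0.70
# 5 0.60  6    0.60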
Dear R People:
Hope you're having a great weekend!
Anyhow, I'm currently experimenting with RStudio on a web server,
which is the best thing since sliced bread, Coca-Cola, etc.
My one question: there is a way to show plots; is there a way to
show Rcmdr or its plugins, please? I tried, but it
See comments in-line:
On 01/10/11 23:26, Antoine wrote:
Dear Rolf,
I tryed to follow your advices but the results I am getting seems still
strange to me. See below an example of a matrix:
datamat<- matrix(c(2.2, 0.4, 0.4, 2.8), 2, 2)
plot(ellipse(datamat),type='l')
eigenval<- eigen(datamat)$v
On 02/10/11 14:06, Jim Silverton wrote:
Hi all,
I have 2 columns in a matrix, one of which is a column of probabilities and
the other is simply a vector of integers. I want to sum all the
probabilities with the same integer value and put it in a new column.
For example,
If my matrix is:
0.98 2
Any good news Arne?
*Felipe Nunes*
CAPES/Fulbright Fellow
PhD Student Political Science - UCLA
Web: felipenunes.bol.ucla.edu
On Thu, Sep 29, 2011 at 5:10 AM, Arne Henningsen <
arne.henning...@googlemail.com> wrote:
> Hi Felipe
>
> On 25 September 2011 00:16, Felipe Nunes wrote:
> > Hi Arne,
>
Dear R People:
This is probably a very simple question. I know that if I want to get
a list of the classes of the objects in the workspace, I can do this:
> sapply(ls(), function(x)class(get(x)))
        a      a1.df          b          d
   "list" "data.frame"  "integer"     "numer
Hi Erin,
Try this: names(which(sapply(.GlobalEnv, is.data.frame)))
Cheers,
Josh
On Sat, Oct 1, 2011 at 8:37 PM, Erin Hodgess wrote:
> Dear R People:
>
> This is probably a very simple question. I know that if I want to get
> a list of the classes of the objects in the workspace, I can do this
Hi,
I want to fit 3 beta distributions to my data which ranges between 0 and 1.
What are the functions that I can easily call and specify that 3 beta
distributions should be fitted?
I have already looked at normalmixEM and fitdistr but they don't seem to be
applicable (normalmixEM is only for fittin
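A hand-rolled EM sketch for a k-component beta mixture, purely as an
illustration (my own code, not a reply from the thread); x must lie strictly
inside (0, 1):

beta_mix_em <- function(x, k = 3, iter = 200) {
  w <- rep(1 / k, k)                    # mixing weights
  a <- rep(2, k)                        # shape1 per component
  b <- seq(1, 3, length.out = k)        # shape2, spread so components differ
  for (it in seq_len(iter)) {
    # E-step: posterior component memberships
    dens <- sapply(seq_len(k), function(j) w[j] * dbeta(x, a[j], b[j]))
    post <- dens / rowSums(dens)
    # M-step: weighted maximum likelihood for each component via optim
    for (j in seq_len(k)) {
      nll <- function(p)
        -sum(post[, j] * dbeta(x, exp(p[1]), exp(p[2]), log = TRUE))
      p <- optim(log(c(a[j], b[j])), nll)$par
      a[j] <- exp(p[1]); b[j] <- exp(p[2])
    }
    w <- colMeans(post)
  }
  list(weights = w, shape1 = a, shape2 = b)
}

# example: two tight components plus a diffuse one
x <- c(rbeta(200, 2, 8), rbeta(200, 8, 2), rbeta(200, 2, 2))
beta_mix_em(x, k = 3)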
I'm getting this error with the attached code and have been breaking my head
over it, but I can't figure it out. Any help is much appreciated. Thanks, Vince
CODE:
library(deSolve)
Res_DAE=function(t, y, dy, pars) {
with(as.list(c(y, dy, pars)), {
res1 = -dS -dES-k2*ES
res2 = -dP + k2*ES
eq1 = Eo-E -ES
I am trying to replicate the script, appended below. My data is in OOCalc
files. The script (below) synthesizes a dataset (it serves as a
"tutorial"), but I will need to get my data from OOCalc into R for use in
that script (which uses arrays).
I've worked my way through the script, and understa
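For the import step, one low-tech route (a suggestion; the file name is
hypothetical) is to export each OOCalc sheet as CSV and read that:

# in OOCalc: File > Save As > Text CSV, then in R:
dat <- read.csv("mydata.csv", header = TRUE, stringsAsFactors = FALSE)
str(dat)   # check column types before building the arrays the script expects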
Thank you for your response!
But the problem is when I estimate a model without knowing the true
coefficients, how can I know which "reltol" is good enough? "1e-8" or
"1e-10"? Why can commercial packages automatically determine the right
"reltol" but R cannot?
--
View this message in context:
h
What I tried is just a simple binary probit model: create random data and
use "optim" to maximize the log-likelihood function to estimate the
coefficients (e.g. u = 0.1 + 0.2*x + e, where e is standard normal, and
y = (u > 0), with y indicating a binary choice variable).
If I estimate the coefficient of "x
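A minimal sketch of the setup described above (my reconstruction, not the
poster's code), with the tighter reltol discussed earlier in the thread:

set.seed(1)
n <- 10000
x <- rnorm(n)
y <- as.integer(0.1 + 0.2 * x + rnorm(n) > 0)

# probit negative log-likelihood: -sum of log Phi((2y - 1) * eta)
negll <- function(b) -sum(pnorm((2 * y - 1) * (b[1] + b[2] * x), log.p = TRUE))

fit <- optim(c(0, 0), negll, method = "BFGS",
             control = list(reltol = 1e-10))
fit$par
coef(glm(y ~ x, family = binomial(link = "probit")))   # cross-check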
Can anyone show how to calculate a multivariate Laplace density? Thanks.
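A sketch of the zero-mean symmetric multivariate Laplace density in the Kotz,
Kozubowski and Podgorski parameterization (one of several things "multivariate
Laplace" can mean), using base R's besselK:

dmvlaplace <- function(x, Sigma) {
  d  <- length(x)
  q  <- drop(crossprod(x, solve(Sigma, x)))   # x' Sigma^{-1} x
  nu <- (2 - d) / 2
  2 / ((2 * pi)^(d / 2) * sqrt(det(Sigma))) *
    (q / 2)^(nu / 2) * besselK(sqrt(2 * q), nu)
}

dmvlaplace(c(0.5, -0.2), diag(2))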
Oh, I think I got it. Commercial packages limit the number of decimals
shown.