Dear R-help,
In the rms package, when using the ols function with a penalty, the
df.residual appears to always be n-1 (with n being the sample size).
That seems strange to me, but I don't have much knowledge in this
area.
Here's an example:
library(rms)
set.seed(1)
n <- 50
d <- data.frame(x1
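(The example above is cut off; a minimal sketch along the same lines, with made-up variable names:)
library(rms)
set.seed(1)
n <- 50
d <- data.frame(x1 = rnorm(n))
d$y <- d$x1 + rnorm(n)
f1 <- ols(y ~ x1, data = d)               # unpenalized fit
f2 <- ols(y ~ x1, data = d, penalty = 5)  # penalized fit
c(f1$df.residual, f2$df.residual)         # compare residual df with and without the penalty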
Thanks for your reply, Mehmet. I've found that the problem was that I
didn't scale the lambda value. My original example did not follow the
instruction not to supply a single lambda value, but that in itself
wasn't the problem. Example shown below.
library(glmnet)
library(MASS)
set.seed(1)
n <- 20
Dear R-help,
I'm having trouble understanding how glmnet converts its coefficient
estimates back to the original scale. Here's an example with ridge
regression:
library(glmnet)
set.seed(1)
n <- 20 # sample size
d <- data.
nyone can give.
Mark Seeto
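(The example above is cut off, so here is a sketch of the back-transformation algebra only, with made-up names. If the predictors are standardized by hand, the coefficients glmnet returns for the standardized x can be mapped back to the original scale like this:)
library(glmnet)
set.seed(1)
n <- 20
x <- matrix(rnorm(n * 3), n, 3)
y <- rnorm(n)
x.std <- scale(x)                                  # centre and scale each column
fit <- glmnet(x.std, y, alpha = 0, standardize = FALSE)
b <- as.matrix(coef(fit, s = 0.5))                 # intercept and slopes on the standardized scale
b.orig <- b[-1, 1] / attr(x.std, "scaled:scale")   # slopes on the original x scale
a.orig <- b[1, 1] - sum(b[-1, 1] * attr(x.std, "scaled:center") /
                        attr(x.std, "scaled:scale"))  # intercept on the original scale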
#
library(rms)
set.seed(1)
n <- 100 # sample size
beta0 <- 3.7
beta1 <- 1.5
beta2 <- 0.9
beta3 <- 0.5
rate.x1 <- 2
mean.x2 <- 1
sd.x2 <- 2
nu <- 1.3
d <- dat
% fa2$Structure)[1:3, ]  # not the same as fa2$scores[1:3, ]
###
Have I misunderstood something?
Thanks,
Mark Seeto
ation=corARMA(p=2), method="ML")
anova(gls.intercept, gls.rcs4)
Thanks,
Mark Seeto
National Acoustic Laboratories, Australia
> Appears to be a definite bug, probably caused by having more than one
> correlation parameter. I hope to have this fixed within 3 days.
> Frank
represent a variable
with a spline using rcs. I'm using version 3.5-0 of rms in R 2.15.0.
Thanks for any help you can give.
Mark Seeto
also happens with ggplot2 plots. I'm using RGui on Windows 7. It did
not happen with R 2.13.1.
It's not a major problem, because the plots still appear to be
produced correctly, but if anyone can tell me how to fix it, I'd
appreciate it.
Thanks,
Mark Seeto
5 y
1 0 0 0
2 1 1 1
names(d)[2] <- "a.-5"
d
  x a.-5 y
1 0    0 0
2 1    1 1
Why does the "a.-5" column name change to "a..5" when another column is added?
Thanks,
Mark Seeto
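(A guess at what is going on, not a reply from the thread: data.frame() re-runs make.names() on the column names unless check.names = FALSE, and make.names() replaces the "-" with ".".)
make.names("a.-5")                            # "a..5"
d <- data.frame(x = 0:1)
d[["a.-5"]] <- 0:1                            # name is kept as typed
data.frame(d, y = 0:1)                        # name becomes "a..5"
data.frame(d, y = 0:1, check.names = FALSE)   # name stays "a.-5"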
to the mailing list. If I reply to the list, there's an option
to email the post to someone, but I don't know the person's email address.
Thanks,
Mark Seeto
Mark Seeto wrote:
>
>
> garciap wrote:
>>
>> Hi to all the people,
>>
>> I'm having trouble when trying to plot a quadratic function. I have the
>> code:
>>
>> regression<-nls(Survival~beta1+beta2*PI+beta3*PI^2, data=cubs,
>>
garciap wrote:
>
> Hi to all the people,
>
> I'm having trouble when trying to plot a quadratic function. I have the
> code:
>
> regression<-nls(Survival~beta1+beta2*PI+beta3*PI^2, data=cubs,
> start=list(beta1 = 1, beta2 = 1, beta3 = 1))
> plot(Survival~PI,data=cubs, ylab="Survival", xlab="P
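(The reply itself is cut off above; a sketch of one way to add the fitted quadratic to the plot, assuming a data frame cubs with columns PI and Survival as in the quoted code:)
regression <- nls(Survival ~ beta1 + beta2*PI + beta3*PI^2, data = cubs,
                  start = list(beta1 = 1, beta2 = 1, beta3 = 1))
plot(Survival ~ PI, data = cubs, xlab = "PI", ylab = "Survival")
PI.grid <- seq(min(cubs$PI), max(cubs$PI), length.out = 200)
lines(PI.grid, predict(regression, newdata = data.frame(PI = PI.grid)))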
forget to run dev.off(), as in that example.
>
> But we don't have the commented, minimal, self-contained, reproducible
> code that the posting guide and the footer of every R message ask for.
>
> On Sun, 12 Jun 2011, Mark Seeto wrote:
>
>>
>> Raptorist
Raptorista wrote:
>
> Now, the graph that appears is very nice: indeed it has a title, two
> axes with their labels and all the rest;
> but when I give commands
>
> postscript(file="plot.eps", onefile=FALSE)
> qqnorm (col)
>
> to save the graph to a file "plot.eps" to include it in a TeX, the f
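(Not the original reply; a sketch of the usual recipe, assuming col is the vector being plotted. The plotting commands have to be issued again while the postscript device is open, and dev.off() must be called so the file is actually written:)
postscript(file = "plot.eps", onefile = FALSE, horizontal = FALSE,
           paper = "special", width = 6, height = 6)
qqnorm(col, main = "Normal Q-Q plot")
qqline(col)
dev.off()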
Thanks for your reply, Frank. I've noticed that the x.knots object doesn't
actually have to be the vector of knots. Just having x.knots <- 0 or even
x.knots <- "a" will allow predict to work.
Mark Seeto
Frank Harrell wrote:
>
> This is a consequence of pred
is simply defined as a vector like
c(-1, 0, 1) (i.e. not using quantile). Is this the intended behaviour?
The requirement that x.knots be in the workspace seems strange, given
that the knot locations are stored in ols1$Design$parms.
Thanks for any help you can give.
Mark Seeto
National Acoustic L
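(Not from the thread; a sketch with made-up data of writing the knot locations directly in the formula, so that predict() does not depend on an x.knots object being in the workspace:)
library(rms)
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- d$x + rnorm(100)
ols1 <- ols(y ~ rcs(x, c(-1, 0, 1)), data = d)   # knots written inline
ols1$Design$parms                                # knot locations stored with the fit
predict(ols1, data.frame(x = c(-0.5, 0.5)))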
want the restricted interaction operator: y
~ rcs(x1, 3) + rcs(x2, 3) + rcs(x1, 3) %ia% rcs(x2, 3).
For the second example use pol(x,2) or something like pol(x1,2) +
pol(x2,2) + pol(x1, 2) %ia% pol(x2, 2)
If you have to create new variables for R formulas you're usually
doing something wrong.
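(A minimal self-contained version of the two formulas above, with simulated data and made-up names:)
library(rms)
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- d$x1 + d$x2 + rnorm(50)
f1 <- ols(y ~ rcs(x1, 3) + rcs(x2, 3) + rcs(x1, 3) %ia% rcs(x2, 3), data = d)
f2 <- ols(y ~ pol(x1, 2) + pol(x2, 2) + pol(x1, 2) %ia% pol(x2, 2), data = d)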
term
Is there a way to do these things without first creating new variables
in the data frame?
Thanks,
Mark Seeto
National Acoustic Laboratories
't see where to
go from there.
Thanks,
Mark Seeto
Amit Patel-7 wrote:
>
> Hi
> I have used the amelia command from the Amelia R package. This gives me a
> number of imputed datasets.
>
> This may be a silly question (I am not a statistician), but I am not sure
> how to combine these results to obtain the imputed dataset to use for f
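(Not from the original reply; a sketch of one way to pool results across the imputed data sets using Rubin's rules; mydata, y, x1 and x2 are made-up names:)
library(Amelia)
a.out <- amelia(mydata, m = 5)
fits <- lapply(a.out$imputations, function(d) lm(y ~ x1 + x2, data = d))
est <- sapply(fits, coef)                               # one column of coefficients per imputation
within.var  <- rowMeans(sapply(fits, function(f) diag(vcov(f))))
between.var <- apply(est, 1, var)
pooled.coef <- rowMeans(est)
pooled.se   <- sqrt(within.var + (1 + 1/length(fits)) * between.var)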
Try excluding the first column.
cor(gse20437[, 2:4])
chintan85 wrote:
>
>
> Tab delimited file looks like this
>
> Id v1 v2 v3
> df 56 90 45
> gh 87 98 78
> ty 89 78 67
>
> I used this code
>
> gse20437 <- read.csv("C:/Users//Desktop/data/GSE20437_matrix
n.samples = 1000, df = 3, boot.reps = 1000)
$t
[1] 0.027 0.014 0.959
$bca
[1] 0.054 0.047 0.899
I don't understand the warning message, but for these examples, the
ordinary t interval appears to be better than the bootstrap BCA
interval. I would really appreciate any recommendations anyone
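(The simulation function above is cut off; for reference, a generic sketch of how a BCa interval is obtained with the boot package, alongside the ordinary t interval:)
library(boot)
set.seed(1)
x <- rt(30, df = 3)
b <- boot(x, function(d, i) mean(d[i]), R = 1000)
boot.ci(b, type = c("norm", "bca"))
t.test(x)$conf.int   # ordinary t interval for comparison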
same?
Thanks in advance; I really appreciate any help you can give.
Mark
Frank Harrell wrote:
On Mon, 9 Aug 2010, Mark Seeto wrote:
Hello, I have a general question about combining imputations as well
as a
question specific to the rms and Hmisc packages.
The situation is multiple regressio
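(Not from the thread; a sketch of the Hmisc/rms route for combining imputations, with made-up data:)
library(rms)   # also attaches Hmisc
set.seed(1)
d <- data.frame(x1 = rnorm(60), x2 = rnorm(60))
d$y <- d$x1 + d$x2 + rnorm(60)
d$x2[sample(60, 10)] <- NA
imp <- aregImpute(~ y + x1 + x2, data = d, n.impute = 5)
fit <- fit.mult.impute(y ~ x1 + x2, ols, imp, data = d)
fit   # coefficients with variances combined across the imputations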
Thank you for your reply, Frank. I am not familiar with the contrast
test, but I'll see what I can find out about it.
Mark
Frank Harrell wrote:
On Mon, 9 Aug 2010, Mark Seeto wrote:
Hello, I have a general question about combining imputations as well
as a
question specific to the rm
result in the ERROR line. I would be most
grateful if anyone could explain this to me.
Thanks,
Mark
--
Mark Seeto
Statistician
National Acoustic Laboratories <http://www.nal.gov.au/>
A Division of Australian Hearing
ive with this R^2 optimism? It can be decreased by
taking a bigger penalty, but then the corrected R^2 is reduced. Also,
a penalty of 9 gives a corrected slope of about 1.17 (corrected slope
of 1 is achieved with a penalty of about 1 or 2).
Thanks for any help/advice you ca
Bill and Erik, thank you very much for your help. In addition to
solving my problem, both solutions contain other good things I didn't
know about.
Regards,
Mark Seeto
On Thu, Jun 10, 2010 at 2:44 PM, Erik Iverson wrote:
> Hello,
>
>> How does one specify a formula to lm inside
utput will show "formula = y ~ x3 + x4" instead of "formula =
paste..."?
Thanks for any help you can give.
Regards,
Mark
--
Mark Seeto
Statistician
National Acoustic Laboratories
A Division of Australian Hearing
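(Not necessarily the solution given in the thread; one way, with made-up names, to have the actual formula rather than the paste() call appear in the output is to build a formula object and pass it through do.call():)
d <- data.frame(y = rnorm(10), x3 = rnorm(10), x4 = rnorm(10))
f <- reformulate(c("x3", "x4"), response = "y")        # y ~ x3 + x4
fit <- do.call("lm", list(formula = f, data = quote(d)))
fit$call   # shows lm(formula = y ~ x3 + x4, data = d)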
> On 06/08/2010 05:29 AM, Mark Seeto wrote:
>>
>>> On 06/06/2010 10:49 PM, Mark Seeto wrote:
>>>> Hello,
>>>>
>>>> I have a couple of questions about the ols function in Frank Harrell's
>>>> rms
>>>> package.
>
> On 06/06/2010 10:49 PM, Mark Seeto wrote:
>> Hello,
>>
>> I have a couple of questions about the ols function in Frank Harrell's
>> rms
>> package.
>>
>> Is there any way to specify variables by their column number in the data
>> f
Error in terms.formula(formula) : '.' in formula and no 'data' argument
Thanks for any help you can give.
Regards,
Mark
--
Mark Seeto
Statistician
National Acoustic Laboratories <http://www.nal.gov.au/>
A Division of Australian Hearing
126 Greville Street
Chatswoo
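(For reference, the '.' error above is generic: the dot shorthand can only be expanded when a data argument is supplied. A sketch with made-up names:)
d <- data.frame(y = rnorm(10), x1 = rnorm(10), x2 = rnorm(10))
try(lm(y ~ .))        # reproduces the error: '.' has nothing to expand against
lm(y ~ ., data = d)   # '.' expands to all columns of d other than y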
Thank you very much for your help, Prof. Harrell. I was making the bad
mistake of judging the appearance of the calibration plots without
actually calculating the regression line. I was misjudging slopes of
0.8 or 0.9 as being slopes greater than 1.
Kind regards,
Mark Seeto
> Mark,
>
xing up the axes). I would be most
appreciative if anyone could explain where I'm going wrong.
Thanks for any help you can provide.
Mark Seeto
Thank you very much, Duncan. I see I also made the mistake of using "c"
instead of "expression" in my question.
Mark
On Fri, Jan 22, 2010 at 7:59 AM, Duncan Murdoch wrote:
> On 21/01/2010 3:50 PM, Mark Seeto wrote:
>>
>> Hello,
>>
>> I'm fairly n
Hello,
I'm fairly new to R and I can't work out how to produce a double
inequality like (LaTeX) $0 \leq x \leq 1$ in the legend of a graph. If
I try
> legend(50, 0.1, legend = c(expression(0 <= x <= 1), c(2 <= x <= 3)), pch =
> c(1,1), col = c(2, 3))
then I get an error message "unexpected '<='
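(Not from Duncan's reply; one workaround that I believe works. The parse error comes from chaining <=, which R's parser rejects even inside expression(); splitting the chain with plotmath's "~" spacing operator avoids it:)
plot(1:100, seq(0, 0.2, length.out = 100), type = "n")
legend(50, 0.1,
       legend = expression(0 <= x ~ "" <= 1, 2 <= x ~ "" <= 3),
       pch = c(1, 1), col = c(2, 3))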