This is a **highly technical** statistical issue, not an R-Help topic.
I strongly suggest that you post to the R-sig-mixed-models list
instead.
Cheers,
Bert
Bert Gunter
"Data is not information. Information is not knowledge. And knowledge
is certainly not wisdom."
-- Clifford Stoll
On Tue, J
I am trying to fit data from 23 subjects using random-effects
regression, and am comparing the results of lme and lmer. The point
estimates and the SEs are the same in both models; however, the degrees
of freedom are widely different: lme reports 88 DF, lmer approximately
22. Can someone help me understand why?
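A minimal sketch with simulated data (not the poster's) of where the two
conventions differ: nlme::lme reports denominator df from its grouping-level
counting rule, while lme4::lmer deliberately reports none, because no single
denominator df is exactly right in general; add-on packages such as lmerTest
or pbkrtest provide Satterthwaite or Kenward-Roger approximations.
library(nlme)
library(lme4)
set.seed(1)
d <- data.frame(subject = factor(rep(1:23, each = 4)), x = rnorm(92))
d$y <- 1 + 0.5 * d$x + rnorm(23, sd = 0.8)[d$subject] + rnorm(92)
m1 <- lme(y ~ x, random = ~ 1 | subject, data = d)
anova(m1)     # numDF / denDF columns from lme's counting rule
m2 <- lmer(y ~ x + (1 | subject), data = d)
summary(m2)   # t values but no df or p values, by design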
Dear All
I've fitted a GAMM to relate water temperature to the day of the year (DOY) at
three different sites. I used summary(MFinal$gam) and anova(MFinal$gam) to
produce the output of my model. I'm confused about how to report the degrees of
freedom for the smoother and the factor. I currently ha
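A minimal sketch with simulated data (the poster's model and data are not
shown): the usual convention is to report the estimated degrees of freedom
(edf) and the reference df (Ref.df) for each smooth from the smooth-terms
table, and the ordinary df for the site factor from the parametric table /
anova.
library(mgcv)
set.seed(1)
dat <- data.frame(DOY  = rep(1:120, times = 3),
                  site = factor(rep(c("A", "B", "C"), each = 120)))
dat$temp <- 12 + 6 * sin(2 * pi * dat$DOY / 365) +
  as.numeric(dat$site) + rnorm(nrow(dat), sd = 0.5)
MFinal <- gamm(temp ~ site + s(DOY, by = site), data = dat)
summary(MFinal$gam)   # edf / Ref.df for each s(DOY):site smooth
anova(MFinal$gam)     # df for the parametric 'site' term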
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Ben Bolker
Sent: 23 January 2014 23:35
To: r-h...@stat.math.ethz.ch
Subject: Re: [R] degrees of freedom
Iain Gallagher btopenworld.com> writes:
>
> Hello List
>
> I have been asked to an
Iain Gallagher btopenworld.com> writes:
>
> Hello List
>
> I have been asked to analyse some data for a colleague.
> The design consists of two sets of animals
>
> First set of three - one leg is treated and the
> other is not under two different conditions (control &
> overload are the sa
Hello List
I have been asked to analyse some data for a colleague. The design consists of
two sets of animals
First set of three - one leg is treated and the other is not, under two
different conditions (control & overload are the same animals - control leg is
control (!) for treated leg;
Hi,
I would like to understand why the residual standard error and the degrees
of freedom change when I define custom contrasts that are not
orthogonal.
For example:
y <- rnorm(40)
x <- factor(rep(1:10,4))
summary(lm(y ~ x))  # standard model: Residual standard error: 1.103 on 30 degrees of freedom
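One mechanism that reproduces the symptom (whether it matches the poster's
code is an assumption): if the custom contrast matrix ends up with fewer
columns than nlevels - 1, for example via the how.many argument, lm() fits a
smaller model, so both the residual df and the residual standard error
change. Any full-rank set of nlevels - 1 contrast columns that, together with
the intercept, spans the full space leaves the fit and the df unchanged,
whether or not the contrasts are orthogonal.
set.seed(1)
y <- rnorm(40)
x <- factor(rep(1:10, 4))
full <- lm(y ~ x)
df.residual(full)     # 30: intercept plus 9 contrast columns
# keep only 2 non-orthogonal contrasts, so lm() estimates 3 coefficients
contrasts(x, how.many = 2) <- cbind(c1 = c(1, -1, rep(0, 8)),
                                    c2 = c(1,  0, -1, rep(0, 7)))
reduced <- lm(y ~ x)
df.residual(reduced)  # 37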
Hi, I need some help figuring out the df I should use in a t test for
my contrast.
I have 5 treatments and 5 phenotypes. I would like to compute the
difference of treatment means for each phenotype and do a t test, such
as treatment1 vs treatment2 on phenotype1.
How should I calculate the pooled degrees of freedom?
Thanks.
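One common approach (a suggestion, not necessarily what the list
recommended): fit the full two-way model and use its pooled residual df for
every contrast; the emmeans package does this automatically. A sketch with
simulated data:
library(emmeans)
set.seed(1)
d <- expand.grid(treatment = factor(1:5), phenotype = factor(1:5), rep = 1:4)
d$y <- rnorm(nrow(d))
fit <- lm(y ~ treatment * phenotype, data = d)
df.residual(fit)   # 100 - 25 = 75, the pooled error df used for each t test
em <- emmeans(fit, ~ treatment | phenotype)
pairs(em)                                                  # all pairwise treatment contrasts per phenotype
contrast(em, method = list(t1_vs_t2 = c(1, -1, 0, 0, 0)))  # one custom contrast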
Well, basically, I ran this analysis once before with just a general linear
model, but nothing stuck after multiple comparisons.
The question is: do reading scores predict volume change over time (in
canonical "reading regions")?
So I tried again, using linear mixed effects, adding subject
SHouston gmail.com> writes:
> I am trying to run a linear mixed effect model on data. I have 17
> longitudinal subjects and 36 single subjects, and this is the code I'm using
> (below). So, INDEX1 is the column with brain volumes, and the predictors
> are gort and age, by time ID (time they were seen).
Hi,
I am trying to run a linear mixed effect model on data. I have 17
longitudinal subjects and 36 single subjects, and this is the code I'm using
(below). So, INDEX1 is the column with brain volumes, and the predictors
are gort and age, by time ID (time they were seen).
I believe my data is
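A minimal sketch of the kind of model described, with simulated data and the
column names (INDEX1, gort, age, ID) taken from the description; the poster's
actual code is not shown. A random intercept per subject lets the 36
single-visit subjects contribute to the fixed-effect estimates while the 17
longitudinal subjects identify the within-subject variance.
library(lme4)
set.seed(1)
long   <- data.frame(ID = rep(1:17, each = 2), time = rep(1:2, times = 17))
single <- data.frame(ID = 18:53, time = 1)
d <- rbind(long, single)
d$ID     <- factor(d$ID)
d$gort   <- rnorm(nrow(d))
d$age    <- rnorm(nrow(d), mean = 10, sd = 2)
d$INDEX1 <- 100 + 2 * d$gort - 1.5 * d$age + rnorm(nrow(d), sd = 5)
fit <- lmer(INDEX1 ~ gort + age + (1 | ID), data = d)
summary(fit)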
I think now everything should be fine and the problem should disappear.
And now about my problem. 'x' is not a set of residuals from an ARMA fit.
I just have 982 weekly quotations of a given stock index and I want to run a
Ljung-Box test on these data to test for autocorrelation. So 'x' would be
ex
Please fix your email settings: your 'From:' field is not in the
correct encoding, so I had to manually copy the ASCII part. (The
header as received here said it was UTF-8, but it is not valid UTF-8.
Most likely no encoding was declared at your end.)
On Sat, 27 Aug 2011, Marcin Pciennik wrote
Dear list members,
I have 982 quotations of a given stock index and I want to run a Ljung-Box
test on these data to test for autocorrelation. Later on I will estimate 8
coefficients.
I do not know how many degrees of freedom I should assume in the formula for
the Ljung-Box test. Could anyone tell me, please?
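A minimal sketch (simulated white noise standing in for the 982 quotations;
lag = 20 is an arbitrary choice): Box.test() takes the number of lags tested
via lag and the number of estimated ARMA coefficients via fitdf, and the
chi-square df is lag - fitdf. For the raw series, with no model fitted yet,
fitdf = 0; residuals of a model with 8 estimated coefficients would use
fitdf = 8.
set.seed(1)
x <- rnorm(982)
Box.test(x, lag = 20, type = "Ljung-Box", fitdf = 0)   # df = 20 on the raw series
# on residuals of a model with 8 coefficients: fitdf = 8, so df = 20 - 8 = 12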
Thank you guys!
Hi:
This is worth reading and bookmarking:
http://glmm.wikidot.com/faq
HTH,
Dennis
On Sat, Aug 13, 2011 at 6:31 AM, xy wrote:
> Hi ,
>
> Could someone please help me with this topic? I don't know how I can extract
> them from my model.
>
> Thanks,
>
> Sophie
>
Hi Sophie,
It is not clear what the degrees of freedom should be in an lmer
model, so their not appearing is intentional. There is fairly
extensive discussion of this topic in the archives for the R-sig-mixed
list.
See, for example: http://rwiki.sciviews.org/doku.php?id=guides:lmer-tests
Cheers
Hi,
Could someone please help me with this topic? I don't know how I can extract
them from my model.
Thanks,
Sophie
On Mar 28, 2011, at 16:53 , Ben Bolker wrote:
> Rubén Roa azti.es> writes:
>
>>
>>
>> However, shouldn't _free parameters_ only be counted for degrees of
>> freedom and for calculation of AIC?
>> The sigma parameter is profiled out in a least-squares
>> linear regression, so it's not free,
Rubén Roa azti.es> writes:
>
>
> However, shouldn't _free parameters_ only be counted for degrees of
> freedom and for calculation of AIC?
> The sigma parameter is profiled out in a least-squares
> linear regression, so it's not free, it's not a
> dimension of the likelihood.
> Just wondering
.org] On behalf of Frank Harrell
> Sent: Monday, 28 March 2011 15:44
> To: r-help@r-project.org
> Subject: Re: [R] Degrees of freedom for lm in logLik and AIC
>
> Thank you Peter. I didn't realize that was the convention used.
> Frank
>
>
> Peter Dalgaard
Thank you Peter. I didn't realize that was the convention used.
Frank
Peter Dalgaard-2 wrote:
>
> On Mar 28, 2011, at 05:36 , Frank Harrell wrote:
>
> > I have a question about the computation of the degrees of freedom in
> a linear
> > model:
> >
> > x <- runif(20); y <- runif(20)
> > f <- lm(y ~ x)
On Mar 28, 2011, at 05:36 , Frank Harrell wrote:
> I have a question about the computation of the degrees of freedom in a linear
> model:
>
> x <- runif(20); y <- runif(20)
> f <- lm(y ~ x)
> logLik(f)
> 'log Lik.' -1.968056 (df=3)
>
> The 3 is coming from f$rank + 1. Shouldn't it be f$rank?
I have a question about the computation of the degrees of freedom in a linear
model:
x <- runif(20); y <- runif(20)
f <- lm(y ~ x)
logLik(f)
'log Lik.' -1.968056 (df=3)
The 3 is coming from f$rank + 1. Shouldn't it be f$rank? This affects
AIC(f).
Thanks
Frank
-
Frank Harrell
Department of
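A minimal check of the convention referred to above: logLik() for an lm fit
counts the residual standard deviation as an estimated parameter of the
Gaussian likelihood, so df = f$rank + 1, and AIC() uses the same count.
set.seed(1)
x <- runif(20); y <- runif(20)
f <- lm(y ~ x)
f$rank                  # 2: intercept and slope
attr(logLik(f), "df")   # 3 = f$rank + 1, because sigma is counted as well
AIC(f)                  # -2 * logLik(f) + 2 * 3, consistent with that df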
Hi:
Look at the links in the following blog entry:
http://blog.lib.umn.edu/moor0554/canoemoore/2010/09/lmer_p-values_lrt.html
and this discussion, found on the R wiki:
http://rwiki.sciviews.org/doku.php?id=guides:lmer-tests
Also see Ben Bolker's GLMM wiki page, which discusses many of the unresolved issues.
Hello,
I have a small problem with degrees of freedom in R.
If you can help me, I will be happy.
I used the nlme package to analyze my data and fit a linear mixed-effects
model in R.
I did the same linear mixed-effects analysis in SAS and SPSS as well.
However, R gave different degrees of freedom than SAS and SPSS did.
I am trying to test for fixed-factor main effects in an unbalanced
mixed-effects model, but when I fit the reduced model for the "mic" factor
effects, the extra degrees of freedom are being allocated to a nested
term rather than to the residuals. The model has inc, mic and spp as
independent variables
On Sep 14, 2009, at 6:02 AM, Breach, Katherine wrote:
Hi
When I run a Chi squared test in R I am automatically given the chi
squared value and the degrees of freedom. How do I find these values
when I've used Fisher's exact test?
The function fisher.test uses the hypergeometric distribution.
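A sketch with a made-up 2x2 table: Fisher's exact test computes an exact
p-value from the hypergeometric distribution, so there is no chi-square
statistic or df to report.
tab <- matrix(c(8, 2, 1, 5), nrow = 2)   # made-up 2x2 table
chisq.test(tab)    # reports X-squared and df = 1 (chi-square approximation)
fisher.test(tab)   # exact p-value and odds ratio; no test statistic, no df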
Hi
When I run a Chi squared test in R I am automatically given the chi squared
value and the degrees of freedom. How do I find these values when I've used
Fisher's exact test?
Cheers,
Katie
You don't have 168 observations - 2 of them have no data (Freq = 0).
On Thu, 10 Apr 2008, Giovanni Petris wrote:
Hello,
I am looking at the job satisfaction data below, from a problem in
Agresti's book, and I am not sure where the degrees of freedom come
from. The way I am fitting a binomial
Hello,
I am looking at the job satisfaction data below, from a problem in
Agresti's book, and I am not sure where the degrees of freedom come
from. The way I am fitting a binomial model, I have 168 observations,
so in my understanding that should also be the number of fitted
parameters in the saturated model.
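A toy illustration of the point in the reply above (not Agresti's data):
glm() excludes zero-weight rows, here a cell with 0 trials, when counting
observations and residual df.
d <- data.frame(success = c(3, 5, 0),
                failure = c(7, 5, 0),
                x       = factor(1:3))
fit <- glm(cbind(success, failure) ~ 1, family = binomial, data = d)
nobs(fit)          # 2: the empty third cell is not counted
df.residual(fit)   # 1 = 2 non-empty cells minus 1 fitted parameter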
Try this:
attr(logLik(lmx), "df")
On 04/03/2008, Davood Tofighi <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I used the logLik() function to get the log-likelihood estimate of an
> object. The function also prints the degrees of freedom. How can I extract
> the degrees of freedom and assign it to
Hello,
I used the logLik() function to get the log-likelihood estimate of an
object. The function also prints the degrees of freedom. How can I extract
the degrees of freedom and assign them to a variable?
Below is the output:
> logLik(fit2pl)
'log Lik.' -4842.912 (df=36)
Thanks,
Davood Tofighi
I suggest this discussion be moved to the R-SIG-mixed-models mailing
list which I am cc:ing on this reply. Please delete the R-help
mailing list from replies to this message.
On Jan 16, 2008 11:44 AM, Feldman, Tracy <[EMAIL PROTECTED]> wrote:
> Dear All,
>
>
>
> I used lmer for data with non-norm
Dear All,
I used lmer for data with non-normally distributed error and both fixed
and random effects. I tried to calculate a "Type III" sums of squares
result, by conducting likelihood ratio tests of the full model against
a model reduced by one variable at a time (for each variable
separately).
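A sketch of the drop-one likelihood-ratio comparison described (the Poisson
family, formula, and data are assumptions for illustration, not the
poster's): each test's df is the number of parameters dropped, and for GLMMs
these chi-square p-values are approximate, one of the topics covered in the
GLMM FAQ linked earlier in this thread listing.
library(lme4)
set.seed(1)
d <- data.frame(a = rnorm(120), b = rnorm(120),
                block = factor(rep(1:12, each = 10)))
blk <- rnorm(12, sd = 0.3)
d$y <- rpois(120, lambda = exp(0.3 + 0.5 * d$a + blk[d$block]))
full    <- glmer(y ~ a + b + (1 | block), data = d, family = poisson)
reduced <- update(full, . ~ . - a)
anova(reduced, full)   # likelihood ratio chi-square, Df = 1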