Hi all,
I would like to fit a gamm model of the form:
Y ~ X + X*f(z)
where f is a smooth function, with random effects on X and on the intercept.
So I try to write it like this:
gam.lme <- gamm(Y ~ s(z, by=X) + X, random=list(groups=pdDiag(~1+X)))
but I get an error message.
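A minimal sketch of one way the call could be written (assuming the data frame is 'dat' and the grouping factor is a column named 'grp', both names hypothetical): in gamm() the names of the 'random' list must be actual grouping variables, so 'grp' replaces the placeholder 'groups' above.

library(mgcv)
library(nlme)
gam.lme <- gamm(Y ~ X + s(z, by = X),
                random = list(grp = pdDiag(~ 1 + X)),
                data = dat)
summary(gam.lme$gam)   # the smooth term
summary(gam.lme$lme)   # the random-effect variances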
Hi all!
I would like to estimate confidence intervals for a non-lm model.
For example, I use a mixed model of the form:
md = lme(y ~ x1 + I(x1^2) + x2, ...)
The x1 and I(x1^2) terms are fixed effects, and I would like to plot the predicted (partial) curve corresponding to them, along with a 90% CI.
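A minimal sketch of one way to do this (assuming the fitted object 'md' and a hypothetical grid of x1 values 'x1.grid'): build the partial fixed-effect curve from the x1 and I(x1^2) coefficients and a 90% interval from their covariance matrix.

library(nlme)
cf   <- c("x1", "I(x1^2)")
beta <- fixef(md)[cf]
V    <- vcov(md)[cf, cf]
Xmat <- cbind(x1.grid, x1.grid^2)
fit  <- drop(Xmat %*% beta)
se   <- sqrt(rowSums((Xmat %*% V) * Xmat))   # diag(X V X')
plot(x1.grid, fit, type = "l")
lines(x1.grid, fit + qnorm(0.95) * se, lty = 2)
lines(x1.grid, fit - qnorm(0.95) * se, lty = 2)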
Dear list,
I would like to produce a plot of variables where the size of the points
will be indicative of their standard errors.
How is that possible?
Thank you!
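A minimal sketch (assuming hypothetical vectors x, y and their standard errors 'se'): either scale the plotting character with cex, or draw circles whose radii are proportional to the SEs with symbols().

plot(x, y, pch = 16, cex = se / max(se) * 3)
# or
symbols(x, y, circles = se, inches = 0.15, bg = "grey")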
Dear all,
I am trying to estimate partial residuals for the multiple regression lm
model:
a.lm=lm(y~x1+x2)
I use the function
residuals(a.lm, type="partial")
However, the results are quite different when I use the "manual" method
to get the partial residuals for x2 (or for x1):
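A minimal sketch of the likely source of the difference (assuming a.lm <- lm(y ~ x1 + x2, data = dat), names hypothetical): type = "partial" adds the *centred* term contribution to the residuals, so it differs from a plain beta*x2 + resid by the mean of the x2 term.

pres   <- residuals(a.lm, type = "partial")[, "x2"]
manual <- residuals(a.lm) + coef(a.lm)["x2"] * dat$x2
all.equal(pres, manual - mean(coef(a.lm)["x2"] * dat$x2),
          check.attributes = FALSE)   # TRUE: they differ only by a constant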
Dear all,
I have an annual time series of population numbers and I would like to
estimate the auto-correlation. Can I use the acf() function and judge
whether the auto-correlation is significant from the plot? The acf array
produced by this function gives the auto-correlation at lags 1, 2, ...
Is that
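A minimal sketch (assuming the annual series is in the vector x$log.s.r, as in the original post): acf() draws approximate 95% bounds at +/- 1.96/sqrt(n), and lags whose bars cross those lines are conventionally flagged as significant.

a    <- acf(x$log.s.r)
n    <- sum(!is.na(x$log.s.r))
crit <- qnorm(0.975) / sqrt(n)
which(abs(a$acf[-1]) > crit)   # lags (1, 2, ...) exceeding the bound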
Hi all!
Is it possible to model a multiple regression in which the response becomes
zero when one of the two covariates is zero?
lm(y ~ x1 + x2), with y = 0 if x1 = 0.
However, when x1 = 0 this model still reduces to y ~ x2 + 1 (intercept).
Does this mean I cannot have a second covariate and an intercept, or should I
eliminate only the
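A minimal sketch (hypothetical data frame 'dat'): forcing y = 0 at x1 = 0 means dropping the intercept and any term that does not vanish with x1, for example letting x2 act only through the x1 slope.

fit <- lm(y ~ 0 + x1 + x1:x2, data = dat)
summary(fit)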
Hi all!
This is a rather statistical question:
is it meaningful to consider an interaction effect between 2 continuous
covariates?
For example: lm(y ~ x1 + x2 + x1:x2)
Should one of the continuous covariates x1, x2 be "transformed" into a categorical
variable, i.e. classified into groups?
Is it easier to interpret
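For reference, a minimal sketch of the continuous-by-continuous interaction as written (hypothetical data frame 'dat'): the x1:x2 coefficient is the change in the slope of x1 per unit increase in x2 (and vice versa), so neither covariate has to be categorised just to fit the model.

fit <- lm(y ~ x1 * x2, data = dat)   # expands to x1 + x2 + x1:x2
summary(fit)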
Hi all!
I am fitting a (mixed) model with a factor (F) and continuous response and
predictor:
y~F+F:x
(How) can I check the significance of the model at each factor level (i.e. the
model could be significant only at one of the levels)?
Thank you!
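A minimal sketch of one way to look at this (shown with lm() for simplicity; 'dat' and the variable names are hypothetical): the y ~ F + F:x parameterisation estimates one slope per level of F, so the coefficient table already tests each level's slope against zero.

fit <- lm(y ~ F + F:x, data = dat)
summary(fit)$coefficients   # the "F<level>:x" rows are the per-level slopes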
Hi all!
(How) is it possible to fit a mixed model with a group-specific auto-correlation
structure? For instance, not all my groups display auto-correlation, so I would
like to use a correlation structure (corAR1) only for the auto-correlated ones.
If I construct the corMatrix manually, is it possible to
Hi all!
I try to estimate a statistic of the form: (x1-x2)/(y1-y2), where
x1,x2,y1,y2 represent variable means, so each has an estimate and
standard error associated with it.
How is it possible to estimate the mean and the variance of this ratio?
Thank you!
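A minimal sketch of the delta-method approximation (assuming the four estimates m.x1, m.x2, m.y1, m.y2 with standard errors se.x1, ..., se.y2 are independent; all names hypothetical): to first order the mean of the ratio is just the ratio of the differences, and its variance follows from the numerator and denominator variances.

num   <- m.x1 - m.x2
den   <- m.y1 - m.y2
R     <- num / den
var.R <- (se.x1^2 + se.x2^2) / den^2 +
         num^2 * (se.y1^2 + se.y2^2) / den^4
sqrt(var.R)   # approximate standard error of the ratio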
Hi all!
I am trying to run a regression where the predictor values are not real
data but each is estimated from a different model. So, for each value I
have a mean and variance.
Which package/function should I use in this case?
Thank you!
Irene
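One simple, if crude, sketch of the idea (hypothetical vectors 'mu' and 'v' holding each predictor value's estimated mean and variance, and a response 'y'): propagate the predictor uncertainty by simulation, redrawing the predictors and refitting many times.

B <- 1000
coefs <- replicate(B, {
  x.sim <- rnorm(length(mu), mean = mu, sd = sqrt(v))
  coef(lm(y ~ x.sim))
})
apply(coefs, 1, quantile, c(0.05, 0.5, 0.95))   # spread of the coefficients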
Hi all!
This is a rather statistical question:
Which effect sizes (parametric or not) could I use in order to estimate
the amount of non-linear correlation between 2 variables?
Is it possible to correct for auto-correlation within the correlated
time series?
Any suggestions for the ap
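A minimal sketch of one option (hypothetical numeric vectors x and y): a rank-based coefficient such as Spearman's rho captures monotone but non-linear association that Pearson's r can miss.

cor(x, y, method = "spearman")
cor.test(x, y, method = "spearman")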
g, so this
has to be taken into account in the model formulation.
What options do I have?
Also, how is it possible to fit a partially linear model?
Thank you!!
Irene Mantzouni
PhD student
DIFRES
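On the partially linear model question above, a minimal sketch using mgcv (hypothetical data frame 'dat' with a linear term x1 and a smoothed term x2):

library(mgcv)
fit <- gam(y ~ x1 + s(x2), data = dat)
summary(fit)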
Dear all,
I am trying to define a selfStart function for a non-linear model, which is a
log-transformed SSmicmen model with multiplicative errors, so the log transform
is needed to make them additive:
log(y) = log(a) + log(x) - log(1 + x/b)
Any ideas about how to use the "peeling" method to derive the initial values?
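A minimal sketch of one way to get initial values without peeling by hand (assumptions: the model is fitted as nls(log(y) ~ SSlogMicmen(x, a, b), ...), and starting values are borrowed from SSmicmen by noting that a*x/(1 + x/b) = (a*b)*x/(b + x), i.e. Michaelis-Menten with Vm = a*b and K = b; all function names are hypothetical):

logMicmen <- function(x, a, b) log(a) + log(x) - log(1 + x/b)

logMicmenInit <- function(mCall, data, LHS, ...) {
  xy   <- sortedXyData(mCall[["x"]], LHS, data)   # the y column holds log(y)
  xy$y <- exp(xy$y)                               # back-transform to y
  ini  <- getInitial(y ~ SSmicmen(x, Vm, K), data = xy)
  val  <- c(ini[["Vm"]] / ini[["K"]], ini[["K"]]) # a = Vm/K, b = K
  names(val) <- mCall[c("a", "b")]
  val
}

SSlogMicmen <- selfStart(logMicmen, initial = logMicmenInit,
                         parameters = c("a", "b"))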
Dear all,
I have a for loop which includes nls model estimation.
The loop breaks after the first non-convergence error.
How can I make the loop continue and try to estimate all the models?
I suppose it should be something like: if(...) { next }
but I have no idea how to set up the arguments...
Thank you!
Thank you all!
Yes, try works!
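A minimal sketch of the try() idiom that worked (hypothetical list of model formulas 'forms' and data frame 'dat'):

fits <- vector("list", length(forms))
for (i in seq_along(forms)) {
  fit <- try(nls(forms[[i]], data = dat), silent = TRUE)
  if (inherits(fit, "try-error")) next   # skip models that fail to converge
  fits[[i]] <- fit
}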
Hi all!
I am trying to fit a 2-level hierarchical lme().
I can extract ranef() and coef() without problems for levels=1:2.
However, when it comes to resid() at level=2, the resulting list has some
unnamed entries (label="NA"). I checked with my data (NAs have been omitted)
and I found out that
Dear all,
I am trying to fit an lme of the form:
m = lme(y ~ F*x - 1, random = ..., data = ...)
F is a 2-level factor, since I need the fixed part to differ between the 2 groups.
The random part depends on subgroups G.
I would like to have a random effect only on the slope, and I cannot figure out
how to formulate it.
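A minimal sketch of one way to write a slope-only random effect (assuming the subgroup factor G is a column of 'dat'; names hypothetical): drop the intercept from the random formula.

library(nlme)
m <- lme(y ~ F * x - 1, random = ~ x - 1 | G, data = dat)
# equivalently: random = list(G = pdSymm(~ x - 1))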
Hi all!
I would like to extract the residual standard error by group in an lme() model...
Is there a direct method?
Also, a rather statistical question:
I need to estimate the standard error of the mean of the residuals in a model...
is this the same as the residual s.e.?
Thank you!!
Irene
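A minimal sketch (assuming the fitted object 'm' and its grouping factor 'dat$grp'; names hypothetical): the per-group residual spread, and the standard error of the per-group mean residual, which is the residual s.d. divided by sqrt(group size) rather than the residual s.e. itself.

tapply(resid(m), dat$grp, sd)
tapply(resid(m), dat$grp, function(r) sd(r) / sqrt(length(r)))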
Hi all!
How is it possible to estimate standard errors for the coefficients obtained from lme?
Is there something like se.coef() for lmer, or what is the analytical solution?
Thank you!
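For the fixed effects at least, a minimal sketch (assuming a fitted lme object 'm'): the standard errors come from the diagonal of vcov(), or from the coefficient table of summary().

sqrt(diag(vcov(m)))
summary(m)$tTable[, "Std.Error"]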
Let's say that I can estimate the se for ranef by group
and the se for fixef (modifying the se.fixef and se.ranef functions of the "arm"
package for lmer).
Since I need the se for coef = ranef + fixef,
can I estimate it based on the SEs of ranef and fixef?
Thank you!
Irene
Dear all,
I am trying to fit a mixed model with a factor and a random effect on a slope:
y ~ F*x + ..., random = ~x
where F is a factor with 2 levels and x the covariate.
The random effects for the 2 levels of F should be equal, so I am fitting the
model like:
ex.lme = lme(y~x+F+z*x+F*x-1,
random=list(g
Dear all,
I have a mixed model of the form:
y[it] = a + b*z[i] + a[i] + (c + c[i])*x[it]   ("i" refers to group and "t" to
observation within group).
The z[i] are group-specific variables available only for a number of groups,
and a[i], c[i] denote the random effects on the intercept and slope
respectively.
Is the
parameter that is a linear combination of these quantities?
All the best,
Irene
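For reference, a minimal sketch of how a model of that form might be written with nlme (hypothetical data frame 'dat' with columns y, x, z and a grouping factor 'grp'; handling the groups for which z is missing is a separate issue):

library(nlme)
m <- lme(y ~ z + x,               # fixed part: a + b*z[i] + c*x[it]
         random = ~ 1 + x | grp,  # random intercept a[i] and slope c[i]
         data = dat, na.action = na.omit)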
From: [EMAIL PROTECTED] on behalf of Douglas Bates
Sent: Wed 17/10/2007 10:04 PM
To: Doran, Harold
Cc: Irene Mantzouni; [EMAIL PROTECTED]; R-SIG-Mixed-Models
Subject: Re: [R] coef se
I would like to fit a 2-level mixed model:
y[it] = a + a[i] + a[it] + (b + b[i] + b[it])*x[it] + eps[it]
However, the variance of the second-level components should depend on the
group, i.e. the sigma for a[it] and b[it] should be group ("i") specific.
I do not know whether this is conceptually right in the mixed model
Is there a formal way to prove the need for a mixed model, apart from e.g.
comparing the intervals estimated by an lmList fit?
For example, should I compare (with AIC? ML?) a model with separately (unpooled)
estimated fixed slopes (i.e. using an index for each group) with a model that
treats this parameter
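A minimal sketch of one common formal check (hypothetical data frame 'dat', covariate x, grouping factor 'grp'): compare the mixed model against the corresponding fixed-effects-only fit from gls() with a likelihood-ratio test / AIC, keeping in mind that tests on variance components sitting on the boundary are conservative.

library(nlme)
m.gls <- gls(y ~ x, data = dat)
m.lme <- lme(y ~ x, random = ~ 1 + x | grp, data = dat)
anova(m.lme, m.gls)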
Dear all,
probably this is quite clear for most of you, but for me it is a headache...
I am regressing response A against the continuous covariate B, and the
relationship is clearly quadratic.
When I add a second covariate C, the relationship becomes linear for both B and
C.
So, I expect that
Hi!
Yes, I think that you understood it right and made it clear enough to me too!
thank you! :)
From: Daniel Malter [mailto:[EMAIL PROTECTED]
Sent: Sun 11/11/2007 10:55 PM
To: 'Rolf Turner'; Irene Mantzouni
Cc: [EMAIL PROTECTED]
Subject: