I agree with Greg's point. In fact, it does not make logical sense in many
cases. It is similar to the use of the "statistically unreliable" reliability
measure Cronbach's alpha in some non-statistical fields.
--
What statistical measure(s) tend to answer ALL(?) questions of practical
interest?
--
My suggestion for Teresa:
If you compare model 1 or model 2 with model 0, the (penalized) likelihood
ratio test is valid, since model 0 is nested in each of them.
If you compare model 2 with model 3, the (penalized) likelihood ratio test
is invalid; you may want to use AIC/SBC to make a (necessarily subjective)
decision.
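For concreteness, here is a minimal sketch of a valid nested comparison. It
uses coxph() and the built-in lung data from the survival package as
stand-ins for the actual (penalized) coxme fits, so the model formulas are
illustrative only:

library(survival)

## "Model 0": a baseline fit with a single covariate
fit0 <- coxph(Surv(time, status) ~ age, data = lung)

## "Model 1": model 0 plus one extra covariate, so model 0 is nested in it
fit1 <- coxph(Surv(time, status) ~ age + sex, data = lung)

## Likelihood ratio test; valid here because the models are nested
anova(fit0, fit1, test = "Chisq")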
--
The likelihood ratio test is more reliable when one model is nested in the
other, which is true in your case.
AIC/SBC are usually used when the two models are not nested (i.e., not in a
hierarchical structure).
Please also note that any decision made based on AIC/SBC scores is very
subjective, since no sampling distribution is available for the difference
in scores.
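To illustrate a non-nested comparison via AIC/SBC (SBC is the criterion more
commonly called BIC), here is a hedged sketch using stand-in covariates from
the survival package's lung data, not the actual variables in question. Note
that both models must be fitted to the same rows for the scores to be
comparable:

library(survival)

## Complete cases only, so both models see the same observations
lung2 <- na.omit(lung[, c("time", "status", "age", "ph.ecog")])

## Two non-nested models: neither is a special case of the other
fit2 <- coxph(Surv(time, status) ~ age,     data = lung2)
fit3 <- coxph(Surv(time, status) ~ ph.ecog, data = lung2)

## Smaller AIC is preferred, but there is no p-value attached:
## the choice remains a judgement call, as noted above.
AIC(fit2, fit3)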