On 05/19/2010 01:39 PM, Ben Bolker wrote:
Frank E Harrell Jr <f.harrell <at> Vanderbilt.Edu> writes:
Please read the large number of notes in the e-mail archive about the
invalidity of such modeling procedures.
Frank
I'm curious: do you have an objection to multi-model averaging
a la Burnham, Anderson, and White (as implemented in the MuMIn
package)? i.e., *not* just picking the
best model, and *not* trying to interpret statistical significance
of particular coefficients, but trying to maximize predictive
capability by computing the AIC values of many candidate models
and weighting predictions accordingly (and incorporating among-model variation
when computing prediction uncertainty)? (I would look for the
answer in your book, but I have lost my copy by loaning it out
& haven't got a new one yet ...)
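[For concreteness, a minimal base-R sketch of the scheme Ben describes: Akaike weights over a candidate set, a weighted-average prediction, and the Burnham-Anderson unconditional standard error that folds in among-model variation. The data and candidate models here are invented for illustration; MuMIn's dredge() and model.avg() automate the same steps.]

## Akaike weights over a small candidate set (invented data)
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
d$y <- 1 + 0.5 * d$x1 + rnorm(50)
fits <- list(lm(y ~ x1,      data = d),
             lm(y ~ x2,      data = d),
             lm(y ~ x1 + x2, data = d))
aic <- sapply(fits, AIC)
w   <- exp(-(aic - min(aic)) / 2)
w   <- w / sum(w)                    # Akaike weights

## Model-averaged prediction at one new point, with an
## unconditional SE that includes among-model variation
## (Burnham & Anderson 2002, eq. 4.9)
newd <- data.frame(x1 = 0.2, x2 = -0.1)
pr   <- lapply(fits, predict, newdata = newd, se.fit = TRUE)
yhat <- sapply(pr, `[[`, "fit")
se   <- sapply(pr, `[[`, "se.fit")
ybar <- sum(w * yhat)                # averaged prediction
se.u <- sum(w * sqrt(se^2 + (yhat - ybar)^2))
c(estimate = ybar, se = se.u)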
Hi Ben,
I think that model averaging (e.g., Bayesian model averaging) works
extremely well. But if you are staying within one model family, it is a
lot more work than the equally excellent penalized maximum likelihood
estimation of a single (big) model. The latter uses more standard tools,
can isolate the effect of a single variable, and yields ordinary model
graphics.
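[As a hypothetical sketch of that single-big-model route, one option in R is penalized maximum likelihood via the rms package's pentrace(); the data and variable names below are invented, and the email does not prescribe this particular workflow.]

## Fit one generous model, choose the penalty, refit (invented data)
library(rms)
set.seed(2)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- rbinom(200, 1, plogis(0.5 * d$x1))
dd <- datadist(d); options(datadist = "dd")

## Keep x and y in the fit object so it can be penalized
f <- lrm(y ~ rcs(x1, 4) + rcs(x2, 4), data = d, x = TRUE, y = TRUE)

## Trace effective AIC over a grid of penalties, refit at the best one
p  <- pentrace(f, seq(0, 20, by = 2))
fp <- update(f, penalty = p$penalty)

## Ordinary model graphics: the isolated effect of x1
plot(Predict(fp, x1))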
I haven't seen a variable selection method that works well without
penalization (shrinkage).
Frank
--
Frank E Harrell Jr, Professor and Chairman
Department of Biostatistics, School of Medicine, Vanderbilt University