[EMAIL PROTECTED] wrote:
In Julian Faraway's text, on pp. 117-119, he gives a very nice, pretty
simple description of how a GLM can be thought of as a linear model
with non-constant variance. I just didn't understand one of his
statements at the top of p. 118. To quote:
"We can use a similar idea to fit a GLM. Roughly speaking, we want to
regress g(y) on X with weights inversely proportional to var(g(y)).
However, g(y) might not make sense in some cases - for example in the
binomial GLM. So we linearize g(y) as follows: Let eta = g(mu) and
mu = E(Y). Now do a one-step expansion, blah, blah, blah."
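For context, the one-step linearization Faraway alludes to is the standard
IRLS (iteratively reweighted least squares) construction: set the working
response z = eta + (y - mu) * g'(mu) and weights w = 1 / (g'(mu)^2 * V(mu)),
then regress z on X with weights w and iterate. A minimal R sketch for a
logistic (binomial) GLM -- my own illustration, not Faraway's code, with
made-up simulated data:

```r
# One-step-expansion IRLS for a logistic GLM, written out by hand.
set.seed(1)
x <- runif(50)
y <- rbinom(50, 1, plogis(-1 + 2 * x))  # 0/1 responses

eta <- rep(0, 50)                 # starting values for the linear predictor
for (i in 1:10) {
  mu     <- plogis(eta)           # mu = g^{-1}(eta), inverse logit
  gprime <- 1 / (mu * (1 - mu))   # g'(mu) for the logit link
  z <- eta + (y - mu) * gprime    # working response: linearized "g(y)"
  w <- 1 / (gprime^2 * mu * (1 - mu))  # simplifies to mu * (1 - mu)
  fit <- lm(z ~ x, weights = w)   # weighted least squares step
  eta <- fit$fitted.values
}
coef(fit)  # matches coef(glm(y ~ x, family = binomial)) after convergence
```

Note that the regression is on the working response z, never on g(y)
itself, which is the point of the question below.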
Could someone explain (briefly is fine) what he means by "g(y) might
not make sense in some cases - for example in the binomial GLM"?
I don't know that text, but I'd guess he's talking about the fact that
the expected value of a binomial must lie between 0 and N (or the
expected value of X/N, where X is binomial from N trials, must lie
between 0 and 1). Similarly, the expected value of a gamma or Poisson
must be positive, etc.
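A concrete way to see the problem (my own illustration, not from either
post): with 0/1 binomial responses, applying the logit link g directly to
the raw data gives infinite values, so "regress g(y) on X" is not even
well defined:

```r
# Applying the logit link directly to raw 0/1 responses fails:
y <- c(0, 1, 1, 0)
qlogis(y)  # -Inf  Inf  Inf -Inf  -- unusable as a regression response
```

This is why the one-step expansion works with g(mu) and a linearized
working response instead of g(y).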
Duncan Murdoch
______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.