On Apr 21, 2011, at 11:30 , Jeffrey Pollock wrote:
> So am I right in saying that binary data isn't the only case where this is
> true? It would make sense to me that for a multinomial model you could have a
> unique factor for each data point and thus be able to create a likelihood of
> 1.
Yes.
algorithm until the coefficients were either 'Inf' or '-Inf'.
Please let me know your thoughts on this.
Thanks again,
Jeff
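A minimal sketch of the behaviour described above (an illustration, not code
from the thread): giving a binary model one factor level per observation
saturates it, the likelihood approaches 1, and the coefficient estimates
diverge toward +/-Inf.
set.seed(1)
y = rbinom(10, size = 1, prob = 0.5)
fit = glm(y ~ factor(1:10), family = binomial)  # one parameter per data point
logLik(fit)       # ~0, i.e. likelihood ~1 (saturated fit)
deviance(fit)     # ~0
range(coef(fit))  # huge +/- values: estimates heading toward +/-Inf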
-----Original Message-----
From: peter dalgaard [mailto:pda...@gmail.com]
Sent: 21 April 2011 09:32
To: Juliet Hannah
Cc: Jeffrey Pollock; r-help@r-project.org
On Apr 21, 2011, at 05:14 , Juliet Hannah wrote:
> As you mentioned, the deviance does not always reduce to:
>
> D = -2(loglikelihood(model))
>
> It does for ungrouped data, such as for binary logistic regression.
To be precise, it only happens when the log likelihood of the saturated model
is zero.
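A quick numerical check of this (a sketch, not from the thread): for ungrouped
binary data the saturated log-likelihood is exactly zero, so the deviance and
-2*logLik(model) coincide.
set.seed(1)
x = rnorm(20)
y = rbinom(20, size = 1, prob = plogis(x))
fit = glm(y ~ x, family = binomial)
deviance(fit)                  # residual deviance
-2 * as.numeric(logLik(fit))   # identical, since loglik(saturated) = 0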
As you mentioned, the deviance does not always reduce to:
D = -2(loglikelihood(model))
It does for ungrouped data, such as for binary logistic regression. So
let's stick with the original definition.
In this case, we need the log-likelihood for the saturated model.
x = rnorm(10)
y = rpois(10, lambda = exp(x))  # archived line is truncated; lambda = exp(x) is an assumed completion
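Continuing that example (under the assumed completion of the truncated line
above): for Poisson data the saturated model sets each fitted mean equal to
y_i, so its log-likelihood is non-zero and must be subtracted explicitly.
fit = glm(y ~ x, family = poisson)
ll.model = sum(dpois(y, lambda = fitted(fit), log = TRUE))
ll.sat = sum(dpois(y, lambda = y, log = TRUE))  # saturated model: mu_i = y_i
-2 * (ll.model - ll.sat)  # agrees with deviance(fit)
deviance(fit)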
It has always been my understanding that deviance for GLMs is defined
by:
D = -2(loglikelihood(model) - loglikelihood(saturated model))
and this can be calculated by (or at least usually is):
D = -2(loglikelihood(model))
as is done in the code for 'polr' by Brian Ripley (in the MASS package).
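A worked caveat (a sketch, not from the thread, with made-up data): the
shortcut only holds when the saturated log-likelihood is zero. With grouped
binomial data it is not, and -2*logLik(model) no longer equals the deviance.
n = c(10, 12, 9, 15)
succ = c(3, 7, 2, 11)
x = 1:4
fit = glm(cbind(succ, n - succ) ~ x, family = binomial)
deviance(fit)                 # the deviance
-2 * as.numeric(logLik(fit))  # differs: loglik(saturated) != 0 here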