Frank E Harrell Jr wrote:
Gad Abraham wrote:
This approach leaves much to be desired. I hope that its practitioners start gauging it by the mean squared error of predicted probabilities.
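[The "mean squared error of predicted probabilities" is the Brier score. A minimal sketch with made-up outcomes and predictions (Python rather than R, purely for illustration):]

```python
import numpy as np

def brier(y, p):
    """Brier score: mean squared error between predicted probabilities
    and the 0/1 outcomes. Lower is better; it rewards both calibration
    and sharpness of the predictions."""
    return np.mean((p - y) ** 2)

y = np.array([1, 0, 1, 0, 1])            # hypothetical binary outcomes
p = np.array([0.8, 0.3, 0.9, 0.1, 0.6])  # hypothetical predicted probabilities
print(round(brier(y, p), 3))             # 0.062
```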

Is the logic here that low MSE of predicted probabilities equals a better-calibrated model? What about discrimination? Perfect calibration

Almost. I was addressing more the wish for the use of strategies that maximize precision while keeping bias to a minimum.

implies perfect discrimination, but I often find that you can have two

That doesn't follow. You can have perfect calibration in the large with no discrimination.
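[A minimal sketch of that point, with hypothetical data: a model that predicts the overall event rate for everyone is perfectly calibrated in the large — its mean prediction matches the observed rate — yet it cannot rank cases at all, so its AUC is 0.5. Python rather than R, purely for illustration:]

```python
import numpy as np

def auc(y, p):
    """Mann-Whitney AUC: P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos, neg = p[y == 1], p[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.3, size=1000)               # outcomes with base rate ~0.3
p_const = np.full_like(y, y.mean(), dtype=float)  # predict the base rate for everyone

# Calibrated in the large: mean prediction equals the observed event rate.
print(np.isclose(p_const.mean(), y.mean()))  # True
# But no discrimination: every comparison is a tie, so AUC is exactly 0.5.
print(auc(y, p_const))                       # 0.5
```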

I'm not sure I understand: if you have perfect calibration, so that you correctly assign the probability Pr(y=1|x) to each x, doesn't it follow that the x will also be ranked in the correct order of probability, which is what the AUC measures?


competing models, the first with higher discrimination (AUC) and worse calibration, and the second the other way round. Which one is the better model?

I judge models on the basis of both discrimination (best measured with log likelihood measures, 2nd best AUC) and calibration. It's a two-dimensional issue and we don't always know how to weigh the two. For many purposes calibration is a must. In those we don't look at discrimination until calibration-in-the-small is verified at high resolution.
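[One reading of "log likelihood measures" is the average log likelihood of the observed outcomes under the predicted probabilities (log loss). A minimal sketch with made-up predictions, showing why it can be more informative than AUC: two models with the same ranking have identical AUC, but log loss still separates them. Python rather than R, purely for illustration:]

```python
import numpy as np

def log_loss(y, p, eps=1e-12):
    """Average negative log likelihood of 0/1 outcomes y under
    predicted probabilities p (clipped to avoid log(0))."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1, 0, 1, 1, 0])
p_sharp = np.array([0.9, 0.1, 0.8, 0.7, 0.2])  # confident, well-ordered
p_flat  = np.array([0.6, 0.4, 0.6, 0.6, 0.4])  # same ranking, less sharp
# Identical AUC (same ordering of cases), but log loss distinguishes them.
print(log_loss(y, p_sharp) < log_loss(y, p_flat))  # True
```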

By "log likelihood measures" do you mean likelihood-ratio tests?

--
Gad Abraham
Dept. CSSE and NICTA
The University of Melbourne
Parkville 3010, Victoria, Australia
email: [EMAIL PROTECTED]
web: http://www.csse.unimelb.edu.au/~gabraham

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.