Hi,

I understand that dichotomization of the predicted probabilities after
logistic regression is philosophically questionable, throws away
information, etc.

But I want to do it anyway.  I'd like to include, as a measure of fit,
the % of observations correctly classified, because it's measured in
units that non-statisticians can understand more easily than area under
the ROC curve, Dxy, etc.

Am I right that there is an optimal Y>=q probability cutoff, at which
the True Positive Rate is high and the False Positive Rate is low?
Visually, it would be the elbow in the ROC curve, right?
My reasoning is that even if you had a near-perfect model, you could
set a stupidly low (high) cutoff and have a higher false positive
(negative) rate than would be optimal.
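
To make that concrete, here's a toy simulation (made-up data, not from
any real model) where the predicted probabilities separate the classes
well, yet an extreme cutoff still inflates one of the error rates:

set.seed(1)
y <- rbinom(1000, 1, 0.5)                        # true 0/1 outcomes
p <- plogis(ifelse(y == 1, 2, -2) + rnorm(1000)) # well-separated predicted probabilities
for (q in c(0.05, 0.50, 0.95)) {
  pred <- as.numeric(p >= q)                     # classify as positive when p >= cutoff
  cat(sprintf("cutoff %.2f: FPR = %.3f, FNR = %.3f\n",
              q, mean(pred[y == 0]), mean(1 - pred[y == 1])))
}

At q = .05 most negatives get flagged as positive, at q = .95 most
positives get missed, and q = .5 does fine; that's the effect I mean.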

I know the standard default or starting point is Y>=.5, but if my
reasoning above is correct, there ought to be an optimal cutoff for a
given model.  Is there an easy way to determine that cutoff in R
without writing my own script to iterate through possible breakpoints
and calculate classification accuracy at each one?
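
For reference, the brute-force script I'm hoping to avoid would look
something like this (just a sketch; p is the vector of predicted
probabilities and y the observed 0/1 outcomes, both placeholder names):

cuts <- seq(0.01, 0.99, by = 0.01)   # candidate cutoffs
acc <- sapply(cuts, function(q) mean((p >= q) == (y == 1)))  # % correct at each cutoff
cuts[which.max(acc)]                 # cutoff maximizing % correctly classified

I gather pROC's coords(roc_obj, "best") does something close to this
out of the box, though I believe it optimizes Youden's J (sensitivity +
specificity - 1) rather than raw classification accuracy.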

Thanks in advance.
-Dan
