Hello,

I would like to know whether there is an R utility for computing some
measure of inter-rater reliability/agreement for 3 raters (columns),
where each rating is a binary assessment. Further, the three raters
are fixed and distinct (each column contains the assessments of one
and the same rater throughout), so ideally the statistic would account
for this.

I see there was a package "concord" that had a variety of utilities
for computing kappa statistics, but it appears to no longer be
available.
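
For concreteness, here is a minimal, reproducible sketch of the data
layout and the kind of call I am hoping exists. The "irr" package and
its kappam.fleiss()/kappam.light() functions are just candidates I
have come across, and I am not sure whether either properly accounts
for the fixed, distinct raters:

## one row per subject, one column per (fixed) rater, binary ratings
## install.packages("irr")   # if not already installed
library(irr)

ratings <- data.frame(
  rater1 = c(1, 0, 1, 1, 0, 1, 0, 1),
  rater2 = c(1, 0, 1, 0, 0, 1, 0, 1),
  rater3 = c(1, 1, 1, 1, 0, 1, 0, 0)
)

## Fleiss' kappa treats raters as interchangeable:
kappam.fleiss(ratings)

## Light's kappa (the mean of pairwise Cohen's kappas) may be closer
## to what I want, since the raters are fixed:
kappam.light(ratings)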
