I am comparing inter-rater reliability between two raters who have coded results, and I now have either a 2x2 table (+/-; cells a b / c d) or a 3x3 table (+, -, and unknown) for each variable coded.
I've calculated kappa and its confidence interval (I'm familiar with kappa but not so much with Finn's r). In other papers, I have noted that some authors present kappa, sensitivity, specificity (taking one rater as the gold standard), and specific agreement (e.g. positive specific agreement = 2a/(2a+b+c)), since kappa is sensitive to prevalence when positives are rare (a << d). Presenting all of these results lets the reader judge the quality and consistency of the coders from several statistical viewpoints.
Q. In Stata v9.2, how do I calculate positive and negative specific agreement for the 2x2 and 3x3 tables?
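So far the best I have managed is to compute them by hand from the cell counts. A minimal sketch of what I mean, assuming hypothetical 0/1 variables rater1 and rater2 and that both raters use both codes (so the table is a full 2x2):

* Save the 2x2 cell frequencies in matrix M
quietly tabulate rater1 rater2, matcell(M)
* With values sorted 0,1: M[2,2]=a (both +), M[1,1]=d (both -),
* M[2,1]=b and M[1,2]=c are the discordant cells
local a = el(M,2,2)
local b = el(M,2,1)
local c = el(M,1,2)
local d = el(M,1,1)
display "positive specific agreement = " 2*`a'/(2*`a' + `b' + `c')
display "negative specific agreement = " 2*`d'/(2*`d' + `b' + `c')

For the 3x3 table I believe the same idea generalises: the specific agreement for category k is twice the diagonal cell n_kk divided by the sum of its row and column totals. But I would rather use a canned command if one exists.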
Thanks in advance, Ann Montgomery in Toronto