st: Interrater reliability for 2 raters with multiple codes per subject
From: Bert Jung <[email protected]>
To: [email protected]
Subject: st: Interrater reliability for 2 raters with multiple codes per subject
Date: Wed, 12 Feb 2014 15:09:01 -0500
Dear Statalisters,
I am struggling to compute inter-rater reliability. I have two
raters, each of whom can assign up to 3 codes per subject. The codes
are categorical, ranging from 1 to 8. The data look like this:
clear
input subject raterA_code1 raterA_code2 raterA_code3 ///
      raterB_code1 raterB_code2 raterB_code3
1 4 7 . 3 4 7
2 3 . . 3 . .
end
list
In this example, for subject 1, raters A and B agree on the codes 4
and 7. Rater A has only two codes, while rater B also used a third
code, 3. It seems that any kappa calculation ought to account for the
fact that the codes are grouped within the two raters.
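One idea I have been toying with (just a sketch, and I am not sure it
is statistically appropriate) is to treat each of the 8 codes as a
binary present/absent rating for each rater and then compute kappa
code by code with -kap-:

* for each code c, flag whether each rater assigned it to the subject
forvalues c = 1/8 {
    generate byte A_has`c' = inlist(`c', raterA_code1, raterA_code2, raterA_code3)
    generate byte B_has`c' = inlist(`c', raterB_code1, raterB_code2, raterB_code3)
    display as text _n "Kappa for code `c':"
    kap A_has`c' B_has`c'
}

On the toy data above this is of course degenerate (only 2 subjects);
the intent would be to run it on the full data set, yielding one kappa
per code. I am not sure this respects the grouping I described, which
is partly why I am asking.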
I thought this kind of setup would be common, but I could not find
any guidance. Any thoughts are warmly welcome.
Thanks in advance,
Bert