Thank you very much for your help, David. It was most appreciated.
Bernadette
Quoting David Harrison <[email protected]>:
> Bernadette
>
> You can calculate an analytical confidence interval for kappa with
> multiple raters and a binary rating using the method of Zou & Donner
> (Biometrics 2004); the commands -kappci- and -kappaci-, which do this
> calculation (for -kap- and -kappa- formatted data, respectively), are
> included in my package -kaputil-, available from SSC (-ssc describe
> kaputil-).
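> 
> For example, to install the package and then look up the syntax (the help
> file should be installed along with the command):
> 
> ssc install kaputil
> help kappci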
>
> Alternatively, you may want to take a look at the command -kapci-, from
> Stata Journal 4(4) (-findit kapci-), which computes bootstrap CIs for
> kappa, although my recollection is that the data need to be in the format
> for -kap- for this one.
>
> You should note, though, that both -kap- and -kappa- require one row per
> subject rather than one row per rater. You can get from one form to the
> other with two -reshape-s; e.g., suppose you have a variable rater taking
> values 1-149 and 14 variables rating1-rating14 for the ratings of the 14
> videos...
>
> reshape long rating, i(rater) j(video)
> reshape wide rating, i(video) j(rater)
>
> Now you have a variable video taking values 1-14 and 149 variables
> rating1-rating149 for the ratings of the 149 raters, and you can use
> -kap- (or -kappci- or -kapci-, or do your own bootstrapping, as sketched
> below).
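> 
> If you would rather do the bootstrapping yourself, resampling raters, one
> rough sketch (untested) is to keep the data in the one-row-per-rater
> layout, wrap the two -reshape-s and -kap- in a small r-class program, and
> let -bootstrap- resample the rows. This assumes the variable names above
> (rater and rating1-rating14), a program name of my own invention
> (kapboot), and that -kap- leaves the combined kappa in r(kappa):
> 
> capture program drop kapboot
> program define kapboot, rclass
>     // data in memory: one row per (possibly resampled) rater
>     // renumber raters so duplicates drawn by -bootstrap- get distinct ids
>     generate long rid = _n
>     keep rid rating*
>     reshape long rating, i(rid) j(video)
>     reshape wide rating, i(video) j(rid)
>     quietly kap rating*
>     return scalar kappa = r(kappa)
> end
> 
> bootstrap kappa=r(kappa), reps(1000) seed(2006): kapboot
> 
> -bootstrap- restores the original data between replications, so it does
> not matter that the program destroys what is in memory. You would run
> this once on the before ratings and once on the after ratings.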
>
> Hope this helps
>
> David
>
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of
> [email protected]
> Sent: 20 June 2006 05:18
> To: [email protected]
> Subject: st: bootstrap confidence intervals for multiple rater kappa
>
> I have a dataset in which 149 raters have classified 14 video scenarios
> using a binary (1,0) classification system on two occasions, once before
> and once after an education intervention. I'm interested in whether the
> education intervention improved the raters' agreement in classifying the
> video scenarios. I can use the "kappa" command on a table summarising
> this dataset to calculate the kappa statistic across the 14 scenarios on
> each occasion. I would now like to calculate bootstrapped confidence
> intervals for these kappas and also compare them between the two
> assessment occasions.
> However, I would like the bootstrap procedure to resample raters rather
> than the 14 video scenarios. Can anyone tell me how to calculate kappa
> with more than two raters using a dataset in which each rater and their
> classifications of the video scenarios occupy a single row, so that
> raters can be resampled when I perform the bootstrap procedure?
> Bernadette Massey
>
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/