Theron's description is vague and confusing. I think he has a 2 x 2
(a b \ c d) table of rater/test agreements (+/-). Conditional kappa
for the positive category would compare observed and expected
agreement in the "a" cell alone.
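For concreteness, here is the usual definition in my own notation
(nothing here is from Theron's post): with cell proportions a, b, c, d
summing to 1, positive conditional kappa compares the observed
proportion in the a cell with the agreement expected from the margins,
scaled by the maximum the a cell could reach given the row margin. A
minimal sketch, assuming that standard definition:

```python
def conditional_kappa_positive(a, b, c, d):
    """Positive conditional kappa for a 2 x 2 table with cell
    proportions a (+/+), b (+/-), c (-/+), d (-/-).
    Compares observed a with the agreement expected from the
    margins, scaled by its maximum given the row margin."""
    assert abs(a + b + c + d - 1.0) < 1e-9
    row_pos = a + b          # rater 1's positive margin
    col_pos = a + c          # rater 2's positive margin
    expected_a = row_pos * col_pos
    return (a - expected_a) / (row_pos - expected_a)
```

Note that if one rater calls everything positive (c = d = 0), the row
margin is 1, expected a equals observed a, and the statistic is
exactly zero.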
Theron's statement that the drug test has only one rating (true
positive) contradicts what he elsewhere says about the data: that
there were a large number of negatives. I think that he means that
the tests are all positive by some gold standard. But he might mean
that it was rated positive by one rater or another. Or he might just
mean that he is interested only in the positive agreements. Now in
principle, either rater (test?) could give the drug a negative rating,
unless one of them *is* the gold standard. If that is the case, Ronan
is correct: Kappa is inappropriate and Theron should just give the
proportion of the time that the second rater is right.
Apparently, in Theron's table, only one rater ever declared a sample
negative; the other rated every sample positive. If so, that rater's
positive margin is 1, so observed a = expected a, conditional kappa is
zero, and his computation is finished.
None of Stata's commands will compute conditional kappa directly, but
it is possible to estimate it indirectly with -mvreg- followed by
-nlcom-
There are four multinomial cells in the 2 x 2 table. Number them
1, 2, 3, 4 to correspond to the 11, 12, 21, 22 cells, respectively.
1. Create a four-line data set, with one line for each of the four
table cells, a cell number, and the count of observations in that
cell.
2. Create 0-1 indicator variables for each observation to indicate
the table cell from which it comes, say x1 x2 x3 x4, using -tab,
gen()- or by hand.
3. Create a constant variable k = 1.
4. Run the command
    mvreg x1 x2 x3 = k [fweight = count]
(If one were interested in the negative conditional kappa, one would
use x2 x3 x4 as the outcome variables instead.)
5. The constants for x1, x2, and x3 are just the sample proportions
in the a, b, and c cells and can be accessed as [x1]_cons, [x2]_cons,
and [x3]_cons.
6. Positive Conditional kappa is a function of these three
proportions. Use -nlcom- to estimate it.
7. To get valid standard errors, run -jackknife- or -bootstrap- on
-mvreg- followed by -nlcom-.
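Outside Stata, the same arithmetic can be checked directly. This is a
hypothetical Python sketch of the steps above (cell proportions, the
positive conditional kappa formula, and a bootstrap for the standard
error); the cell counts are invented for illustration, not taken from
Theron's data:

```python
import random

def kappa_pos(n11, n12, n21, n22):
    # Sample proportions for the a, b, and c cells (step 5).
    n = n11 + n12 + n21 + n22
    a, b, c = n11 / n, n12 / n, n21 / n
    row_pos, col_pos = a + b, a + c
    expected_a = row_pos * col_pos
    # Positive conditional kappa (step 6).
    return (a - expected_a) / (row_pos - expected_a)

# Hypothetical counts for the 11, 12, 21, 22 cells.
counts = [40, 10, 20, 30]
point = kappa_pos(*counts)

# Bootstrap standard error (step 7): resample the multinomial cells.
random.seed(1)
n = sum(counts)
cells = [i for i, k in enumerate(counts) for _ in range(k)]
reps = []
for _ in range(1000):
    resample = [random.choice(cells) for _ in range(n)]
    tab = [resample.count(i) for i in range(4)]
    reps.append(kappa_pos(*tab))
mean = sum(reps) / len(reps)
se = (sum((r - mean) ** 2 for r in reps) / (len(reps) - 1)) ** 0.5
```

The point estimate matches what -nlcom- would return for the same
table; the bootstrap plays the role of -bootstrap- wrapped around
-mvreg- and -nlcom-.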
-Steve
On Wed, Nov 4, 2009 at 10:52 AM, Jackson, Theron Keith (UMSL-Student)
<[email protected]> wrote:
> The drug test has only one rating (True Positives). Due to the large majority of the sample being (True Negatives) the Kappa will be biased (Bishop et al., 1975; Magura et al., 1987; Magura et al., 1996; Sherman et al., 1992; Patton et al., 2005).
>
> "A more appropriate indicator of agreement between self-reports and urinalysis for most of the studies reviewed is a conditional kappa coefficient...The self-reports of subjects with negative urinalyses are ignored in the computation of conditional Kappa." (Magura et al., 1987 p. 734).
>
> It appears to me that there is no direct way to compute conditional kappa; however, I should be able to weight kappa so that those who test positive are given all the weight of the sample.
>
Ronan Conroy wrote:
>
> Are you telling us that you have only three participants? Then the
> advice is to get more.
>
> Or are you telling us that the drug test has only one rating
> (positive) because it is correct? In this case, your whole sample is
> made up of true positives. In a case like this, kappa is
> inappropriate. The level of agreement reduces to the proportion of
> self-reports (rating 2) which are positive.
>
>
Steven Samuels
[email protected]
18 Cantine's Island
Saugerties NY 12477
USA
845-246-0774
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/