My original inclination was to use the proportion correctly identified, i.e.,
(true positives + true negatives) / total population.
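As a quick sanity check (the counts below are made up purely for
illustration), this is straightforward to compute in Stata:

    * hypothetical 2x2 counts (illustration only)
    scalar tp = 90     // diseased, test positive
    scalar fn = 10     // diseased, test negative
    scalar fp = 15     // healthy, test positive
    scalar tn = 85     // healthy, test negative
    scalar N  = tp + fn + fp + tn
    display "proportion correctly identified = " (tp + tn)/N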
A biostatistician told me that "accuracy" (as defined in my earlier note)
includes the prevalence of disease, which I hadn't heard before. I think the
biostatistician believed it was important to use a weighted average, i.e.,
to account for my own population (which is actually not too different from
the population in which the test was developed).
Originally I wanted to report the ROC area (as calculated by -diagt-), but I
was told that with a dichotomous test the ROC area wasn't relevant, i.e., I
didn't have different cutpoints at which to operate the test. That's when
she mentioned accuracy defined with prevalence.
In fact, ROC area doesn't account for prevalence either...
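For example (the sensitivity and specificity here are assumed, just for the
arithmetic), the ROC area for a dichotomous test is the same whatever the
prevalence, while the prevalence-weighted accuracy shifts with it:

    * assumed operating characteristics (illustration only)
    scalar sens = 0.90
    scalar spec = 0.80
    display "ROC area, any prevalence       = " (sens + spec)/2
    display "weighted accuracy, prev = 0.05 = " 0.05*sens + 0.95*spec
    display "weighted accuracy, prev = 0.50 = " 0.50*sens + 0.50*spec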
Thanks for your comments.
Heather
Original Message:
-----------------
From: Ronán Conroy [email protected]
Date: Thu, 09 Jun 2005 16:42:31 +0100
To: [email protected]
Subject: Re: st: diagt and accuracy
Heather Gold wrote:
> Oops, the first line was cut off.
> I meant to write that diagt defines accuracy as
> (sensitivity+specificity)/2.
>
The help file says
The ROC (Receiver Operating Characteristic curve) area
is (for a simple test) the average of sensitivity and specificity.
- not the same thing. -diagt- doesn't provide accuracy in its output.
> >Has anyone heard of accuracy defined as
> >(prevalence*sensitivity + (1-prevalence)*specificity)? This is like a
> >weighted average that incorporates prevalence and might be helpful with a
> >dichotomous diagnostic test (i.e., rather than ROC). If so, is there a
> >standard depending on one's field, e.g., medicine?
> >
>
There are two sorts of questions you can ask about a test: ones that
relate to its ability to identify people and ones that relate to the
probability that a result is correct.
Sensitivity/specificity are the proportion of people with/without the
condition who are correctly identified. Positive/negative predictive
values are the proportion of positive/negative tests which are correct.
The predictive values depend on prevalence, while sensitivity and specificity do not.
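As a sketch (the sensitivity, specificity, and prevalence below are assumed
values, not from any real test), this is just Bayes' theorem:

    * assumed test characteristics and prevalence (illustration only)
    scalar sens = 0.90
    scalar spec = 0.85
    scalar prev = 0.20
    * probability that a positive/negative result is correct
    scalar ppv = prev*sens / (prev*sens + (1-prev)*(1-spec))
    scalar npv = (1-prev)*spec / ((1-prev)*spec + prev*(1-sens))
    display "PPV = " ppv "   NPV = " npv

Change prev and the predictive values move; sensitivity and specificity
stay fixed by construction.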
It follows that there are two ways of conceptualising overall test
performance - the proportion of people correctly identified and the
proportion of tests that are correct. In each case, however, this comes out
as the sum of true positives and true negatives over the total N, so no
separate prevalence adjustment is needed: the weighting by your sample's
prevalence is already built in.
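Indeed, for any 2x2 table the overall proportion correct is algebraically
identical to the weighted average you quoted, with the sample prevalence as
the weight. A quick check with made-up counts:

    * made-up 2x2 counts (illustration only)
    scalar tp = 90
    scalar fn = 10
    scalar fp = 15
    scalar tn = 85
    scalar N    = tp + fn + fp + tn
    scalar prev = (tp + fn)/N          // sample prevalence
    scalar sens = tp/(tp + fn)
    scalar spec = tn/(tn + fp)
    display "(tp + tn)/N               = " (tp + tn)/N
    display "prev*sens + (1-prev)*spec = " prev*sens + (1-prev)*spec

Both lines print the same value, whatever counts you plug in.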
What did you have in mind that the adjustment would accomplish?
--
Ronan M Conroy ([email protected])
Senior Lecturer in Biostatistics
Royal College of Surgeons
Dublin 2, Ireland
+353 1 402 2431 (fax 2764)
--------------------
Just say no to drug reps
http://www.nofreelunch.org/
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/