Original message from Roger Harbord <[email protected]>:
I tried to save the estimates with -parmest-, but that is not possible
after -diagt-. The graph commands -serrbar- and -twoway rcap- can
produce a graph with the confidence intervals of the estimates, but it
is not possible to include the values of the sensitivity or specificity
themselves. Any advice would be very helpful.
Thank you very much!
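
One possible workaround, sketched under assumptions (disease and
test1-test12 all coded 0/1; the file name senscis is made up): since
-diagt- is not an estimation command, -parmest- cannot pick up its
results, but each sensitivity and its exact binomial CI can be computed
with -ci, binomial- among the diseased, collected with -post-, and then
plotted with -twoway rcap- overlaid on a -scatter- whose mlabel()
option prints the sensitivity values on the graph:

  * collect the sensitivity of each test among the diseased
  postfile S test sens lb ub using senscis, replace
  forvalues i = 1/12 {
      * exact binomial CI for the proportion test`i'==1 when disease==1
      quietly ci test`i' if disease==1, binomial
      post S (`i') (r(mean)) (r(lb)) (r(ub))
  }
  postclose S
  use senscis, clear
  * rcap draws the CIs; mlabel() labels each point with its sensitivity
  twoway (rcap ub lb test) (scatter sens test, mlabel(sens)), ytitle(Sensitivity) xtitle(Test)

The specificities follow similarly with "if disease==0", taking one
minus the resulting proportions and interval limits; if -eclplot- is
installed, "eclplot sens lb ub test" should draw a similar picture.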
> Try -serrbar- or -twoway rcap-. However, you'd first need to save the
> estimates and CIs as variables. Roger Newson's -parmest- package could be
> one way to do that, after which you could use his -eclplot- package (both
> available on SSC) as an alternative to -serrbar- or -twoway rcap-.
>
> Roger H.
>
> --On 08 September 2005 16:15 +0300 [email protected] wrote:
>
> > Original message from Roger Harbord <[email protected]>:
> >
> > Thank you very much for your help. This is the solution, and I had
> > already found it in a related article. May I ask whether you know how
> > to produce an error graph (a graph of the sensitivities and their
> > confidence intervals) for every diagnostic test? Does Stata 8 support
> > a graph like this?
> >
> > Thank you very much in advance!
> >
> >
> >> As Pepe mentions on p43, you can test the null hypothesis of equal
> >> sensitivity or of equal specificity of two binary tests done on the
> >> same people using McNemar's test (-symmetry- or -mcc- commands in
> >> Stata). I think something like:
> >>
> >> . symmetry test1 test2 if disease==1 /* for sensitivity */
> >> . symmetry test1 test2 if disease==0 /* for specificity */
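> >>
> >> -mcc- should give the same McNemar test directly; a sketch with the
> >> same variables:
> >>
> >> . mcc test1 test2 if disease==1 /* for sensitivity */
> >> . mcc test1 test2 if disease==0 /* for specificity */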
> >>
> >>
> >> However, with 12 tests there are a lot of comparisons (66 for each of
> >> sensitivity and specificity), so some allowance for multiple testing
> >> does seem a good idea.
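> >>
> >> For instance, a simple (if conservative) Bonferroni allowance would
> >> require a McNemar p-value below 0.05/66, roughly 0.00076, before
> >> declaring any single pairwise difference significant.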
> >>
> >> A Bayesian approach seems quite attractive for this sort of problem as
> >> you can then meaningfully ask "what is the probability that test X has
> >> the highest sensitivity?", which you can't in a frequentist framework.
> >> You'd need to switch to something like WinBUGS to get an answer to that
> >> though.
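> >>
> >> For the simplest version of that calculation (independent conjugate
> >> Beta(1,1) priors on each sensitivity), the probability can also be
> >> simulated by drawing from the Beta posteriors. A rough sketch with
> >> made-up counts, say test1 detecting 45 of 50 diseased and test2
> >> detecting 42 of 50, using inverse-CDF draws via invibeta():
> >>
> >> . clear
> >> . set obs 10000
> >> . set seed 12345
> >> . gen p1 = invibeta(46, 6, uniform())  /* posterior Beta(45+1, 5+1) */
> >> . gen p2 = invibeta(43, 9, uniform())  /* posterior Beta(42+1, 8+1) */
> >> . gen best1 = p1 > p2
> >> . summarize best1  /* mean estimates Pr(test1 is more sensitive) */
> >>
> >> With all 12 tests one would draw a posterior sample for each and
> >> count how often each test ranks highest.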
> >>
> >> If one test has higher sensitivity than another but lower specificity,
> >> or vice versa, then which test is better also depends, of course, on
> >> the disbenefits of false positives compared with those of false
> >> negatives.
> >>
> >> Roger.
> >>
> >> --
> >> Roger Harbord [email protected]
> >> MRC Health Services Research Collaboration & Dept. of Social Medicine
> >> University of Bristol http://www.epi.bris.ac.uk/staff/rharbord
> >>
> >> --On 07 September 2005 15:07 -0400 "Michael P. Mueller"
> >> <[email protected]> wrote:
> >>
> >> > You might want to take a look at this book: Pepe, M.S. (2003).
> >> > Statistical Evaluation of Medical Tests for Classification and
> >> > Prediction. Dr. Pepe has Stata programs on her webpage that you can
> >> > download. Hope this helps,
> >> > Michael
> >> >
> >> > [email protected] wrote:
> >> >
> >> >> Original message from Svend Juul <[email protected]>:
> >> >>
> >> >>
> >> >>
> >> >>> htzvara (?) wrote:
> >> >>>
> >> >>> I have one variable that indicates whether the patient has the
> >> >>> disease (coded 0/1); this is the reference standard.
> >> >>> Additionally, I have 12 more variables that represent the
> >> >>> outcomes of 12 different diagnostic procedures (all coded 0/1).
> >> >>> I want to find the best diagnostic procedure, so I calculated
> >> >>> the sensitivity and specificity and their confidence intervals
> >> >>> for each of them. If the confidence interval for the sensitivity
> >> >>> of one diagnostic procedure does not overlap the confidence
> >> >>> interval for the sensitivity of another, then the difference is
> >> >>> significant.
> >> >>> Is there a test I can perform that gives a p-value? Is there a
> >> >>> need to correct for multiple comparisons?
> >> >>>
> >> >>> ----
> >> >>>
> >> >>> It is not quite clear to me what you want. If it is to find the
> >> >>> single test that has the "best" predictive value, try Paul Seed's
> >> >>> -diagt- (findit diagt). However, you must look at both sensitivity
> >> >>> and specificity to get a meaningful assessment.
> >> >>>
> >> >>> I am not sure why you want to test whether the sensitivities of
> >> >>> two tests differ significantly, and the confidence-interval
> >> >>> comparison you describe is quite insensitive.
> >> >>>
> >> >>> Would this show what you need? Run a logistic regression
> >> >>> followed by -lroc- (ROC analysis):
> >> >>> . logistic disease test1-test12
> >> >>> . lroc
> >> >>>
> >> >>> You might then try removing tests to see whether their removal
> >> >>> makes a difference to the AUC (area under the curve).
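> >> >>>
> >> >>> A related way to put a p-value on the comparison, assuming the
> >> >>> tests are the 0/1 variables test1-test12: -roccomp- tests the
> >> >>> equality of the ROC areas of several classifiers against the
> >> >>> same reference standard, and for a binary test the AUC equals
> >> >>> (sensitivity + specificity)/2, so this compares overall accuracy:
> >> >>>
> >> >>> . roccomp disease test1 test2 test3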
> >> >>>
> >> >>> Hope this helps
> >> >>> Svend
> >> >>>
> >> >>>
> >> >>
> >> >> Thank you very much for your help. I know about -diagt- and used
> >> >> it to obtain the sensitivities and specificities. ROC analysis
> >> >> cannot help, as the variables that represent the diagnostic tests
> >> >> are not continuous but dichotomous (0/1). Even if I can see which
> >> >> test has the best sensitivity and specificity, I want a formal
> >> >> test to prove it.
> >> >>
> >> >> Thank you again.
>
--
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/