Dear All,
Just a hopefully interesting paper on this topic: Bland M., "The Tyranny of
Power".
It can be downloaded at:
http://www-users.york.ac.uk/~mb55/talks/fullpap.htm
Kind Regards,
Carlo
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Paul Seed
Sent: Thursday, 18 September 2008 14:51
To: [email protected]
Subject: st: Re: STATA/detectable difference
The short answer to Elliott's question is: "the same important difference
that was used (and justified) in the original power calculation". However,
it may be that there was no power calculation, and he is now obliged to
fill that hole.
Even so, it is the actual results (estimate with CI) that matter, not the
post-hoc power calculation.
At the design stage, when seeking funding and ethical approval, it is very
important to demonstrate that the planned study has a reasonable chance
(typically 80% or 90% power) of detecting an important difference if one
exists. Assuming there was a power calculation, it should be quoted in the
methods section. If not, the oversight should be admitted.
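For instance, a design-stage calculation of this kind can be run in Stata
with -sampsi- (or, in recent versions, -power twoproportions-). A minimal
sketch, assuming purely illustrative proportions of 0.10 and 0.20 and 90%
power (not Elliott's actual figures):

. sampsi 0.10 0.20, power(0.90) alpha(0.05)

This reports the sample size per group needed to detect a rise from 10% to
20% with 90% power at the 5% significance level.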
At the publication stage, you should know what you have and have not shown,
and that is where the emphasis should be. Give the estimated difference
(whether expressed as an odds ratio, risk ratio, etc.) with a confidence
interval. If your important difference lies outside the interval, you have
shown "no important difference", or possibly even "equivalence".
One of the main reasons for power calculations in study protocols is that
you are obliged to define your clinically important differences up front,
and cannot fix them to suit your final results.
--
Paul T Seed MSc CStat, Lecturer in Medical Statistics,
tel (+44) (0) 20 7188 3642, fax (+44) (0) 20 7620 1227
[email protected], [email protected]
King's College London, Division of Reproduction and Endocrinology
St Thomas' Hospital, Westminster Bridge Road, London SE1 7EH
Date: Wed, 17 Sep 2008 10:14:15 -0400
From: Steven Samuels <[email protected]>
Subject: st: Fwd: STATA/ detectable difference
Elliott emailed me privately with the information that a journal
reviewer asked for the "minimum detectable" difference. In this case,
Elliott should give a set of plausible values, not just the observed
sample value, for one of his proportions. So, if he observed p0_hat
= 12% and p1_hat = 20%, he might use values p0 = 0.10, 0.125, and 0.15 to
compute minimum detectable risks.
Then he can state: "The minimum relative risks detectable in advance
depended on the true proportion in Group 1. For assumed proportions
of 0.10, 0.125, and 0.15, the corresponding detectable relative risks
were x, y, and z."
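Something along these lines would produce that table in Stata. It is only a
sketch: the 250 subjects per group, 80% power, two-sided alpha of 0.05, and
the 0.005 search step are all assumptions to be replaced by the study's
actual design figures (-sampsi- returns the computed power in r(power)):

foreach p0 in 0.10 0.125 0.15 {
    local p1 = `p0'
    local pow = 0
    while `pow' < 0.80 {
        // step p1 upward until the assumed design reaches 80% power
        local p1 = `p1' + 0.005
        quietly sampsi `p0' `p1', n1(250) n2(250) alpha(0.05)
        local pow = r(power)
    }
    display "p0 = `p0': minimum detectable p1 = " %5.3f `p1' ///
        ", detectable relative risk = " %4.2f `p1'/`p0'
}

Each pass increases p1 until -sampsi- reports at least 80% power, so the
final p1 (and p1/p0) is the smallest difference the assumed design could
have detected.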
- -Steve
Begin forwarded message:
> > From: Elliott Dasenbrook <[email protected]>
>
> > I have submitted a manuscript and a reviewer requested that we
> > provide "the minimum detectable difference that our study had the
> > power to detect";
> >
>
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*