yup, this is why this is a big topic ;-) ... with no easy answer...
conducting and interpreting sparse event meta-analysis is
challenging... especially when you find a difference in the end.
[and on top of that, as Ingram complains, no one uses the angular
(Freeman-Tukey) transformation ;) ]
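
(for the record, a minimal sketch in Stata of what that would look
like, using hypothetical variables ev_t, n_t, ev_c, n_c for events
and sample sizes in the two arms: the Freeman-Tukey double arcsine
per arm, with approximate variance 1/(n + 0.5):

    . gen double ft_t = asin(sqrt(ev_t/(n_t+1))) + asin(sqrt((ev_t+1)/(n_t+1)))
    . gen double ft_c = asin(sqrt(ev_c/(n_c+1))) + asin(sqrt((ev_c+1)/(n_c+1)))
    . gen double d  = ft_t - ft_c                   // arcsine difference
    . gen double vd = 1/(n_t + 0.5) + 1/(n_c + 0.5) // its approx. variance

d and vd stay well defined even when ev_t = ev_c = 0, which is the
whole attraction for sparse data)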
t
On Mon, Mar 17, 2008 at 1:54 PM, Marcello Pagano
<[email protected]> wrote:
> Welcome to one of the few areas in statistics where people promote
> throwing data away with impunity. One could say that whether or not you
> discard experiments with no events on either arm depends on which side
> of the argument you are on. Since no events on either arm is evidence
> toward equality of the two arms, throw the data away if you are trying
> to show a difference :-) Otherwise, why throw the data away?
>
> The usual reason proffered for discarding the data is that it is
> embarrassing, when looking at odds ratios, to have to divide zero by
> zero. But the question you have to ask yourself is why you are looking
> at odds ratios in the first place. If you have to (for example, if you
> have a case-control study), then you have a problem; otherwise, stay
> clear of odds ratios and, with appropriate methods of analysis, you
> won't have a problem.
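>
> To see the embarrassment concretely, take a trial with 0 events out
> of 100 subjects in each arm. In Stata:
>
>    . display (0/100) / (0/100)    // ratio of the two odds: 0/0, missing
>    . display 0/100 - 0/100        // risk difference: 0
>
> The odds ratio comes back as missing, while the risk difference is a
> perfectly well-defined zero.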
>
> m.p.
>
>
>
> On 3/17/2008 1:24 PM, Tom Trikalinos wrote:
> > this is a big topic.
> > see for example:
> >
> > Stat Med. 2007 Jan 15;26(1):53-77. Much ado about nothing: a
> > comparison of the performance of meta-analytical methods with rare
> > events. Bradburn MJ, Deeks JJ, Berlin JA, Russell Localio A.
> >
> > Stat Med. 2004 May 15;23(9):1351-75. What to add to nothing? Use and
> > avoidance of continuity corrections in meta-analysis of sparse data.
> > Sweeting MJ, Sutton AJ, Lambert PC.
> >
> > and quite a few other papers that are out there. The ones discussing
> > the recent rosiglitazone meta-analysis are also relevant. Do a PubMed
> > search if you have not already.
> >
> > My take: Assuming you do not go Bayesian and that you use the
> > typical, garden-variety meta-analysis methods:
> >
> > 1. Random effects per DerSimonian and Laird are probably a no-go for
> > main analyses (biased tau^2 estimates in simulation studies).
> > 2. Peto OR seems to do well in terms of bias and coverage
> > probabilities for the CI (!).
> > 3. The Mantel-Haenszel (M-H) OR seems to do well; I presume the same
> > holds for the RR, though I think the evidence there is not as clear,
> > or so I remember.
> > 4. The M-H RD is reported to give somewhat biased estimates and
> > conservative CIs.
> >
> > 5. If you use multiplicative effect sizes (e.g., an OR), I would
> > calculate the main analyses without the 0% vs 0% studies, then add
> > them back in a sensitivity analysis (see the -metan- sketch after
> > the caveat below).
> >
> > All the above comes with the caveat that one needs an operational
> > knowledge of the relevant methods literature, and of which methods
> > need fudge factors (continuity corrections) to deal with 0 cells...
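> >
> > For concreteness, a minimal sketch with the user-written -metan-
> > package from SSC, assuming hypothetical variables ev_t no_t ev_c
> > no_c that hold the 2x2 cell counts per trial (syntax from memory,
> > so double-check against -help metan-):
> >
> >    . ssc install metan
> >    . * (2) Peto OR:
> >    . metan ev_t no_t ev_c no_c, peto nograph
> >    . * (3) M-H OR (M-H pooling is, I believe, metan's default for binary data):
> >    . metan ev_t no_t ev_c no_c, or nograph
> >    . * (5) main analysis excluding the 0 vs 0 trials...
> >    . metan ev_t no_t ev_c no_c if ev_t + ev_c > 0, or nograph
> >    . * ...then add them back in as the sensitivity analysis:
> >    . metan ev_t no_t ev_c no_c, or nograph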
> >
> > hope this helps
> >
> > tom
> >
> >
> >
> > On Mon, Mar 17, 2008 at 12:42 PM, Sripal Kumar <[email protected]> wrote:
> >
> >> I was wondering what your thoughts are on meta-analysis of trials
> >> with a limited number of events. Should studies with no events be
> >> excluded from the analysis?
> >>
> >> Any input is highly appreciated.
> >> thanks,
> >> Sripal.
> >>
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/