For the record, I realized that I was bootstrapping the wrong thing.
Here is a (minimally simplified) version of what I meant to do ...
. bootstrap rr_univ=(invlogit($mbpos_terms)/invlogit($mbneg_terms)), /*
*/ reps($reps) saving(multiboot_notneg_$i, replace): /*
*/ logistic outcome zlog zero mbpos int_zlog_pos int_zero_pos
. estat bootstrap
(no more red 'x's)
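(To eyeball the bootstrap distribution itself, something like this should work on the file saved above; an untested sketch, assuming the replicates are stored under the exp_list name rr_univ:)
. preserve
. use multiboot_notneg_$i, clear
. summarize rr_univ, detail
. centile rr_univ, centile(2.5 97.5)
. restore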
So the question was ... how to explain that these CIs
appear so much better than those generated as follows?
. logistic outcome zlog zero mbpos int_zlog_pos int_zero_pos
. predictnl rr = invlogit($mbpos_terms)/invlogit($mbneg_terms), se(se_rr)
. gen ub = rr + 1.96*se_rr
. gen lb = rr - 1.96*se_rr
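(As an aside, since a ratio's sampling distribution tends to be skewed, the same delta-method interval could instead be formed on the log scale and exponentiated; an untested sketch using the same globals defined below:)
. predictnl lnrr = ln(invlogit($mbpos_terms)/invlogit($mbneg_terms)), se(se_lnrr)
. gen lb_ln = exp(lnrr - 1.96*se_lnrr)
. gen ub_ln = exp(lnrr + 1.96*se_lnrr)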
(and is it reasonable to assume that with a whole lot of reps, the
bias-corrected bootstrapped CIs are in fact better?)
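(On that point: after the bootstrap, -estat bootstrap, all- should report the normal-approximation, percentile, and bias-corrected intervals side by side; BCa intervals would additionally need the bca option on the bootstrap prefix, e.g.:)
. bootstrap rr_univ=(invlogit($mbpos_terms)/invlogit($mbneg_terms)), /*
*/ reps($reps) bca saving(multiboot_notneg_$i, replace): /*
*/ logistic outcome zlog zero mbpos int_zlog_pos int_zero_pos
. estat bootstrap, all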
Where:
. global mbpos_terms _b[_cons] + _b[int_zlog_pos]*`constant' + /*
*/ _b[int_zero_pos] + _b[zlog]*`constant' + _b[zero] + _b[mbpos]
. global mbneg_terms _b[_cons] + _b[zlog]*`constant' + _b[zero]
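(Here `constant' is a local holding the covariate value at which the ratio is evaluated; it must already be defined when the globals are set, since the local expands at that point. For example, with a hypothetical value:)
. local constant 2.5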
Perhaps the question is clearer now (?)
Daniel