On 2004-11-02, at 11.35, Ron�n Conroy wrote:
Take a step back here. Have you *graphed* your outcome against your
predictor variable?
Thanks for your advice. Yes I have graphed it. And there is a squared
component that kicks in at about 7 on the scale were probabilities starts
to rise dramatically. The graphed probabilities looks fine and are
according to theory.
The problem is the standard errors in the predicted RRR using -nlcom-.
There seem to be a paradoxical relation here: the more extreme the RRR the
LESS significant they are.
The paradox described above can be found in auto.dta also. Consider a
logit model where the probability of a car being foreign is modelled as a
function of length. Length is negatively associated with foreign
(-.0797353). Using -nlcom- a significant (p<.001) ratio of 1.3 between the
predicted probabilities are fond for length=1 vs length=10. When length=1
is compared to length=100 the ratio increase to 764 but is no longer
significant (p=.606). Code is listed below:
sysuse auto
logit foreign length
// RRR for length=1 vs length=10
nlcom (exp(1 * _b[length] + _cons) / (1+ exp(1 * _b[length] + _cons)))
/ ///
(exp(10 * _b[length] + _cons) / (1+ exp(10 * _b[length] +
_cons))) //
// RRR for length=1 vs length=100
nlcom (exp(1 * _b[length] + _cons) / (1+ exp(1 * _b[length] + _cons)))
/ ///
(exp(100 * _b[length] + _cons) / (1+ exp(100 * _b[length] +
_cons))) //
I might be doing something I shouldn't and I'm happy for any advice on how
to calculate RRRs with CI from the logit model above using auto.dta.
Michael
This is not an issue of model fit or any such. The issues raised re linear
probability models etc are valid, but they are different from what you are
reporting here, I think).