Re: st: question on zero inflated regression
From: Maarten Buis <[email protected]>
To: [email protected]
Subject: Re: st: question on zero inflated regression
Date: Thu, 17 Feb 2011 10:33:50 +0000 (GMT)
--- On Thu, 17/2/11, rachel grant wrote:
> In my case, two of seven variables were significant
> predictors of zeroes (both temperature), and that makes
> sense because at low temperatures amphibians cannot move.
> 1. If I change the order of the variables, the p-value of
> each variable sometimes changes, although the overall LR
> and p-value for the model remain similar. Why does this
> happen and what can I do about it?
It sounds like your two temperature variables are highly
collinear. In that case there is just very little
information in your data that can be used to distinguish
between the effects of these two variables. You could
take a look at -orthog- (see -help orthog-) to transform
these variables so that they are uncorrelated.
Alternatively, you could take the position that if they
are that correlated, either one of them will contain most
of the relevant information, and you can just leave the
other out.
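A minimal sketch of that approach (the variable names y,
rainfall, tmin, and tmax are made up here as stand-ins for
your own variables, and the model is just an illustration):

*---------------- begin example ----------------
* check how strongly the two (hypothetical) temperature
* variables are correlated
correlate tmin tmax

* create orthogonalized versions; otmin and otmax are
* uncorrelated with one another
orthog tmin tmax, generate(otmin otmax)

* use the orthogonalized variables in the inflation equation
zinb y rainfall otmin otmax, inflate(otmin otmax)
*----------------- end example -----------------

Keep in mind that the coefficients of orthogonalized
variables are harder to interpret, which is why simply
dropping one of the two is often the more practical choice.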
> 2. when I try to get a better fitting model by removing
> nonsignificant variables the log-likelihood decreases
> slightly. I am not sure why this happens or what to do
That is exactly what should happen. The fact that an effect
is non-significant does not mean that the effect is really
0. In fact, it is highly unlikely that the effect is
exactly 0. The fact that you included it in your model
suggests that you thought it could affect the outcome.
Such variables will probably all affect the outcome; the
question that significance tests answer is whether you
collected enough information to detect that effect. I
admit that this is a rather cynical interpretation of
statistical testing, but it is not wrong. It is good to
keep in mind that we usually test a hypothesis that we
already know cannot be true. For your case that means that
the likelihood should change slightly when you leave out
these variables.
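If you want to see how much the likelihood changes, you can
compare the two models directly. A rough sketch, again with
made-up variable names (x3 stands for whichever
non-significant predictor you are thinking of dropping):

*---------------- begin example ----------------
* full model, including the non-significant predictor x3
zinb y x1 x2 x3, inflate(temp1 temp2)
estimates store full

* restricted model without x3
zinb y x1 x2, inflate(temp1 temp2)
estimates store restricted

* the restricted log likelihood will be slightly lower;
* the likelihood-ratio test tells you whether that
* difference is larger than you would expect by chance
lrtest full restricted
*----------------- end example -----------------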
Typically I would leave them in my model. If I thought it
was worthwhile to look at them, then I should tell my
audience I did that and show them that the effects were
non-significant. The easiest way to do that is to just
leave them in my model.
Hope this helps,
Maarten
--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany
http://www.maartenbuis.nl
--------------------------
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/