Re: st: Sequential Probit
From: Maarten Buis <[email protected]>
To: [email protected]
Subject: Re: st: Sequential Probit
Date: Fri, 4 Mar 2011 13:11:22 +0000 (GMT)
--- On Fri, 4/3/11, Stephen Jenkins wrote:
> Just to confirm Maarten's remark that tastes differ:
> (i) I am less confident than he is that logit estimates are
> easier to interpret than probit estimates. Much of this
> depends on whether you are sure that you (and your target
> audience) understand what odds ratios are. In my opinion,
> they are more poorly understood than most quantitative
> sociologists hope or assume.
We agree that one needs to be careful to ensure that odds
ratios are properly understood when presenting them, and that
there are many examples where that has not happened. A trick
that often works for me is to also include the baseline odds in
the table of results, and start the discussion with that. That
is a natural way of "refreshing" the audience's memory of what
an odds is (expected number of "successes" per "failure"). The
next coefficient you discuss will allow you to extend it by
saying that an odds ratio is a very well chosen name (unusual
in statistics), as it literally is a ratio of odds. After that
you can move more quickly through your results. (A more
detailed discussion of this trick and how to do it in Stata is
here: <http://www.stata.com/statalist/archive/2011-02/msg00785.html>)
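As a minimal sketch of that trick (the dataset and covariate
are mine, chosen purely for illustration), one can report the
baseline odds by exponentiating the constant:

*---------------- begin example ----------------
sysuse nlsw88, clear

* odds ratios for the covariates
logit union i.collgrad, or

* baseline odds: expected "successes" per "failure"
* when collgrad = 0
nlcom exp(_b[_cons])
*----------------- end example -----------------

The discussion then starts with the baseline odds, and each
subsequent odds ratio is presented as the factor by which
those odds change.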
> It's not that probit parameter estimates are easier to
> understand; rather I suggest working in the probability
> metric. That is, look at the implications of the estimates
> using marginal effects, average marginal effects or
> predicted probabilities more generally (in Stata, think
> -margins-).
There is no doubt that the various forms of marginal effects
and predicted probabilities are useful. I have, however, two
reservations. First, if all you are going to do is interpret
a linear approximation of a non-linear model, then why not
cut out the middleman and directly estimate a linear
probability model? Second, marginal effects are only easy in
relatively simple models. As soon as you add things like
interaction terms, odds ratios tend to be a lot simpler,
because the logit model is linear in the log(odds), e.g.:
<http://www.maartenbuis.nl/publications/interactions.html>
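For example (a sketch with a toy model; the variables are
mine, not from the thread), compare the two metrics in a
model with an interaction:

*---------------- begin example ----------------
sysuse nlsw88, clear

* logit with an interaction term
logit union i.collgrad##i.south

* in the probability metric the effect of collgrad
* differs across the levels of south
margins south, dydx(collgrad)

* in the odds metric the interaction is a single number:
* a ratio of odds ratios
logit, or
*----------------- end example -----------------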
> (ii) how to treat unobserved heterogeneity is of course
> difficult -- it is unobserved! A multivariate probit
> model with sample selection (cf. Cappellari and Jenkins
> article in Stata Journal (2006), 6(2), free
> download) is one way to proceed. The cost is the
> assumption of joint normality (trivariate normal in the
> poster's case).
I agree, though given my tastes I would have put the
emphasis a bit differently. The link to that article is
<http://www.stata-journal.com/article.html?article=st0101>
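For readers who want to try that route: the same authors'
earlier user-written -mvprobit- command fits a multivariate
probit by simulated maximum likelihood, while the
sample-selection variant in the article is programmed with
-ml-. A purely illustrative sketch (my equations, not
theirs):

*---------------- begin example ----------------
* assumes: ssc install mvprobit
sysuse nlsw88, clear

* two correlated probit equations, illustrative only
mvprobit (union = grade ttl_exp) (married = grade south), draws(50)
*----------------- end example -----------------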
> (iii) This way of modelling the heterogeneity is
> conventional, but of course the specification is a maintained
> assumption (as Maarten stresses). On the other hand, the
> implicit heterogeneity model that he assumes in his own
> sequential logit package is unclear to me from his paper.
> The model that he implicitly tests against is also a
> maintained assumption. (I think it's a single factor model
> -- i.e. with the latent errors perfectly correlated and with
> a Normal marginal distribution. No doubt Maarten can correct
> me.)
There are two options in -seqlogit-:
By default it estimates a regular sequential logit, which
means that the error terms across transitions are uncorrelated.
I think of that as literally modelling the odds given only the
variables in the model. This is a slightly different way of
saying what was my first strategy in this post:
<http://www.stata.com/statalist/archive/2011-03/msg00231.html>.
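As a minimal sketch of that default use (the coding of the
outcome and the covariates are mine, purely for
illustration):

*---------------- begin example ----------------
* assumes: ssc install seqlogit
sysuse nlsw88, clear

* illustrative three-level outcome: <12, 12, >12 years
gen byte ed = cond(grade < 12, 1, cond(grade == 12, 2, 3)) if !missing(grade)

* transition 1: category 1 versus {2,3}
* transition 2: category 2 versus 3, given one passed transition 1
seqlogit ed south ttl_exp, tree(1 : 2 3, 2 : 3)
*----------------- end example -----------------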
Alternatively, one can estimate the model while assuming a
certain scenario for the unobserved heterogeneity. This assumes
that there is one unobserved variable (which one could think of
as a composite of a set of variables) that influences each
transition. One sets the distribution at the first transition
(normal or a discrete distribution). Due to selection, the
distribution at later transitions will deviate from the
distribution at the first transition, and selection will result
in a negative correlation between the observed and unobserved
variables (these changes in the distribution and correlation
are derived from the model, not specified by the user). Finally,
the distribution of the unobserved variable at the first
transition is standardized to have a standard deviation of 1,
and one needs to choose the size of its effect at each
transition. The logic behind this set-up is that it allows
for a range of scenarios that can help investigate what the
potential influence of unobserved heterogeneity could be, in
a way similar to the robustness check you propose below.
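The selection effect described above is easy to see in a
small simulation (my sketch, separate from -seqlogit-
itself):

*---------------- begin example ----------------
clear
set obs 10000
set seed 12345
gen x = rnormal()              // observed variable
gen u = rnormal()              // unobserved, independent of x
gen byte pass1 = runiform() < invlogit(x + u)

corr x u                       // roughly 0 in the full sample
corr x u if pass1              // negative among those who pass
*----------------- end example -----------------

Among those who pass the first transition, a high x
compensates for a low u and vice versa, which is where the
negative correlation comes from.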
> (iv) Whatever, it is now relatively straightforward to
> explore in Stata what happens using either approach. That sort
> of robustness checking is useful.
I agree.
-- Maarten
--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany
http://www.maartenbuis.nl
--------------------------