Re: st: Interpreting marginal effects for binary variables in multinomial logit
From: Austin Nichols <[email protected]>
To: [email protected]
Subject: Re: st: Interpreting marginal effects for binary variables in multinomial logit
Date: Wed, 13 Jun 2012 13:54:53 -0400
Julian Runge <[email protected]>:
Your interpretation sounds correct, but such atmeans marginal effects
are meaningless.
Consider the second command, or equivalently a logit of y==2 on binary x1 and x2.
The marginal effect for x1 is dp/dx1 evaluated at x2==0.7, say.
No one in the data actually has x2==0.7, so comparing the predicted probability
at x1==0 and x2==0.7 to that at x1==1 and x2==0.7 makes no real sense.
In practice, you often get something similar to a more sensible marginal effect,
but that does not make it right to compute predictions for a nonlinear model at
covariate patterns that are impossible to observe.
It's not an "average marginal effect at the average" but simply
a "marginal effect at the average" since the other x vars are fixed.
The problem is that the average of a vector of binary predictors is a
terrible point at which to evaluate marginal effects.
That is, your use of the words "for a representative individual"
implies such a person might be 70% a college graduate, or
10% pregnant, for example.
Are the binary x vars related in any way?
Do they include interactions or other logical dependencies?
If so, you have even worse problems.
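
For concreteness, here is a minimal sketch of that equivalent logit in Stata
(the data are simulated and the variable names x1, x2, y2 and the coefficients
are invented for illustration, not taken from your model):

clear
set seed 12345
set obs 1000
generate byte x1 = runiform() < 0.5
generate byte x2 = runiform() < 0.7           // sample mean of x2 is near 0.7
generate byte y2 = runiform() < invlogit(-1 + 0.8*x1 + 0.5*x2)
logit y2 i.x1 i.x2
* "at means": effect of x1 with the x2 indicator fixed at its mean (about 0.7),
* a covariate pattern no observation can actually have
margins, dydx(x1) atmeans
* average marginal effect: dp/dx1 averaged over the observed values of x2
margins, dydx(x1)
* effects at the two covariate patterns that actually occur in the data
margins, dydx(x1) at(x2=(0 1))

The atmeans result plugs a fractional value of x2 into the prediction; the
at() results use only values an observation can actually take.
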
On Wed, Jun 13, 2012 at 10:50 AM, Julian Runge
<[email protected]> wrote:
> Hello!
>
> Two brief (closely related) questions that I could not find a definitive
> answer to, either in the literature or in discussions with peers. I
> would really appreciate your input, especially on question 1:
>
> 1)
> My model has a categorical dependent variable and all independent variables
> are binary. I used a multinomial logit model with y={0, 1, 2} and 0 as base
> outcome to estimate the model. After running the regression, I applied the
> following commands to get marginal effects:
>
> margins, predict(outcome(1)) dydx( x1 x2 ... ) atmeans
> margins, predict(outcome(2)) dydx( x1 x2 ... ) atmeans
>
> Now I am unsure how to interpret the marginal effects. I would do as
> follows:
>
> It is the ceteris paribus mean effect for a discrete change in the
> respective binary independent variable from zero to one for a representative
> individual (in terms of “being average” on all variables, i.e. the
> covariates are fixed at their mean) in the sample. Let us consider an
> example to make this more accessible: The marginal effect on x1 for category
> y=1 tells us that, ceteris paribus, a subject that answers “yes” (x1=1)
> instead of “no” (x1=0) has a 0.0a (a%) higher probability of being in
> category y=1.
>
> --> Am I getting this right?
>
>
> 2)
> A credible online source noted the following: "The default behavior of
> margins is to calculate average marginal effects rather than marginal
> effects at the average or at some other point in the space of regressors."
> Taking this into account I would think that I am calculating an "average
> marginal effect at the average" above. Is that correct?
>
>
> Thank you in advance,
> Julian
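
A minimal sketch of the full workflow described above, contrasting the atmeans
calls with the default behavior of -margins- (average marginal effects), again
on simulated data (the outcome equations and coefficients below are invented;
only the command pattern follows the original post):

clear
set seed 2012
set obs 2000
generate byte x1 = runiform() < 0.5
generate byte x2 = runiform() < 0.7
* invented multinomial probabilities for y = 0, 1, 2
generate double e1 = exp(0.5 + 0.8*x1 - 0.3*x2)
generate double e2 = exp(-0.2 + 0.4*x1 + 0.6*x2)
generate double den = 1 + e1 + e2
generate double u = runiform()
generate byte y = 0
replace y = 1 if u < e1/den
replace y = 2 if u >= e1/den & u < (e1 + e2)/den

mlogit y i.x1 i.x2, baseoutcome(0)

* "at means", as in the original commands: each binary covariate is held
* at its sample proportion (roughly 0.5 and 0.7 here)
margins, predict(outcome(1)) dydx(x1 x2) atmeans
margins, predict(outcome(2)) dydx(x1 x2) atmeans

* default: average marginal effects, averaged over the observed data
margins, predict(outcome(1)) dydx(x1 x2)
margins, predict(outcome(2)) dydx(x1 x2)

The atmeans numbers evaluate the effects at fractional values of x1 and x2
that no respondent has; the default numbers average the effects over the
sample.
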
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/