Note: This FAQ is for Stata 10 and older versions of Stata. In Stata 11,
the margins command replaced mfx.

Title   Obtaining marginal effects for a few independent variables
Author  May Boggess, StataCorp

Question: I am interested in obtaining marginal effects for only a few of
the independent variables. How can I do that?
We can choose the independent variables for which we want marginal effects
by using the varlist() option:
. sysuse auto, clear
(1978 automobile data)
. constraint define 1 price=weight
. cnsreg mpg price weight, constraint(1)
Constrained linear regression Number of obs = 74
F(1, 72) = 37.59
Prob > F = 0.0000
Root MSE = 4.7220
( 1) price - weight = 0
------------------------------------------------------------------------------
         mpg | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
       price |  -.0009875   .0001611    -6.13   0.000    -.0013086   -.0006664
      weight |  -.0009875   .0001611    -6.13   0.000    -.0013086   -.0006664
       _cons |   30.36718   1.577958    19.24   0.000     27.22158    33.51278
------------------------------------------------------------------------------
. mfx, predict(pr(20,30)) nose
Marginal effects after cnsreg
y = Pr(20<mpg<30) (predict, pr(20,30))
= .57557714
----------------------------------------
variable |      dy/dx              X
---------+------------------------------
   price |   -.0000651        6165.26
  weight |   -.0000651        3019.46
----------------------------------------
. mfx, predict(pr(20,30)) nose varlist(price)
Marginal effects after cnsreg
y = Pr(20<mpg<30) (predict, pr(20,30))
= .57557714
----------------------------------------
variable |      dy/dx              X
---------+------------------------------
   price |   -.0000651        6165.26
----------------------------------------
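In Stata 11 and later, the same restricted request is made with the margins
command, whose dydx() option plays the role of mfx's varlist(). A minimal
sketch, assuming the cnsreg fit above is still in memory:

. margins, dydx(price) predict(pr(20,30)) nose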
The varlist() option can understand abbreviations:
. webuse xmpl2, clear
. blogit deaths pop agecat exposed, or nolog
Logistic regression for grouped data Number of obs = 409
LR chi2(2) = 22.47
Prob > chi2 = 0.0000
Log likelihood = -142.6212 Pseudo R2 = 0.0730
------------------------------------------------------------------------------
    _outcome | Odds ratio   Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
      agecat |   4.216299   1.431519     4.24   0.000     2.167361    8.202223
     exposed |   1.404674   .4374454     1.09   0.275     .7629451    2.586175
       _cons |   .0513818   .0170762    -8.93   0.000     .0267868    .0985593
------------------------------------------------------------------------------
Note: _cons estimates baseline odds.
. mfx, predict(p)
Marginal effects after blogit
y = Pr(outcome) (predict, p)
= .11114585
---------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]      X
---------+-----------------------------------------------------------------
  agecat*|    .146902      .03367   4.36   0.000   .080908  .212896     .5
 exposed*|   .0336358      .03074   1.09   0.274  -.026618   .09389     .5
---------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
. mfx, predict(p) varlist(ag)
Marginal effects after blogit
y = Pr(outcome) (predict, p)
= .11114585
---------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]      X
---------+-----------------------------------------------------------------
  agecat*|    .146902      .03367   4.36   0.000   .080908  .212896     .5
---------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
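The rough Stata 11+ counterpart is margins with dydx(). One caveat: margins
treats agecat as continuous unless the model is refit with factor-variable
notation (i.agecat), so it need not reproduce the discrete 0-to-1 change that
mfx reports for a dummy. A minimal sketch, assuming blogit supports margins
in your version of Stata:

. margins, dydx(agecat)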
It even understands the wildcard character *:
. sysuse auto, clear
(1978 automobile data)
. svyset [pweight=turn]
Sampling weights: turn
VCE: linearized
Single unit: missing
Strata 1: <one>
Sampling unit 1: <observations>
FPC 1: <zero>
. xi: svy: regress mpg weight length i.rep78
i.rep78 _Irep78_1-5 (naturally coded; _Irep78_1 omitted)
(running regress on estimation sample)
Survey: Linear regression
Number of strata = 1 Number of obs = 69
Number of PSUs = 69 Population size = 2,746
Design df = 68
F(6, 63) = 35.77
Prob > F = 0.0000
R-squared = 0.6995
------------------------------------------------------------------------------
             |             Linearized
         mpg | Coefficient   std. err.      t    P>|t|    [95% conf. interval]
-------------+----------------------------------------------------------------
      weight |  -.0023266   .0018753    -1.24   0.219    -.0060687    .0014154
      length |  -.1123922   .0658759    -1.71   0.093    -.2438454     .019061
   _Irep78_2 |  -.1298266   1.060557    -0.12   0.903    -2.246136    1.986483
   _Irep78_3 |  -.5064828   .8987613    -0.56   0.575    -2.299933    1.286967
   _Irep78_4 |  -.2704473   .9751403    -0.28   0.782    -2.216309    1.675415
   _Irep78_5 |   2.340767   2.190164     1.07   0.289    -2.029638    6.711171
       _cons |    49.4272   6.987183     7.07   0.000     35.48449    63.36991
------------------------------------------------------------------------------
. mfx, predict(xb)
Marginal effects after svy:regress
y = Linear prediction (predict, xb)
= 20.815368
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
  weight |  -.0023266     .00188   -1.24   0.215  -.006002  .001349    3107.1
  length |  -.1123922     .06588   -1.71   0.088  -.241507  .016722   190.449
_Irep7~2*|  -.1298266    1.06056   -0.12   0.903  -2.20848  1.94883   .126366
_Irep7~3*|  -.5064828     .89876   -0.56   0.573  -2.26802  1.25506   .448653
_Irep7~4*|  -.2704473     .97514   -0.28   0.782  -2.18169  1.64079   .252367
_Irep7~5*|   2.340767    2.19016    1.07   0.285  -1.95188  6.63341   .142753
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
. mfx, predict(xb) varlist(_Irep*)
Marginal effects after svy:regress
y = Linear prediction (predict, xb)
= 20.815368
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
_Irep7~2*|  -.1298266    1.06056   -0.12   0.903  -2.20848  1.94883   .126366
_Irep7~3*|  -.5064828     .89876   -0.56   0.573  -2.26802  1.25506   .448653
_Irep7~4*|  -.2704473     .97514   -0.28   0.782  -2.18169  1.64079   .252367
_Irep7~5*|   2.340767    2.19016    1.07   0.285  -1.95188  6.63341   .142753
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
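Because the prediction here is the linear index xb, the dy/dx reported for
each dummy is simply its regression coefficient, so the restricted table can
be spot-checked against the stored estimates. For example, assuming the
svy: regress fit is still in memory, the following should reproduce the
dy/dx shown for _Irep7~5:

. display _b[_Irep78_5]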
The varlist() option can understand time-series operators:
. sysuse auto, clear
(1978 automobile data)
. generate t=_n
. tsset t
Time variable: t, 1 to 74
Delta: 1 unit
. regress mpg l(1/3).length head turn
      Source |       SS           df       MS      Number of obs   =        71
-------------+----------------------------------   F(5, 65)        =     18.06
       Model |  1409.28532         5  281.857063   Prob > F        =    0.0000
    Residual |  1014.60201        65  15.6092617   R-squared       =    0.5814
-------------+----------------------------------   Adj R-squared   =    0.5492
       Total |  2423.88732        70  34.6269618   Root MSE        =    3.9509

------------------------------------------------------------------------------
         mpg | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
      length |
         L1. |  -.0372479   .0236292    -1.58   0.120    -.0844386    .0099429
         L2. |   .0557295   .0248165     2.25   0.028     .0061675    .1052915
         L3. |  -.0283338   .0238291    -1.19   0.239    -.0759237    .0192561
             |
    headroom |  -.7995216   .6130181    -1.30   0.197    -2.023803    .4247601
        turn |   -.876568   .1295939    -6.76   0.000    -1.135385     -.617751
       _cons |   60.39905    5.75474    10.50   0.000     48.90604    71.89205
------------------------------------------------------------------------------
. mfx, varlist(l(1/3).length)
Marginal effects after regress
y = Fitted values (predict)
= 21.338028
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
L.length |  -.0372479     .02363   -1.58   0.115   -.08356  .009064   188.099
L2.len~h |   .0557295     .02482    2.25   0.025    .00709  .104369   188.338
L3.len~h |  -.0283338     .02383   -1.19   0.234  -.075038   .01837   188.775
-------------------------------------------------------------------------------
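In Stata 11 and later, margins accepts time-series operators in dydx() in
much the same way. A minimal sketch, assuming the regression above is still
in memory:

. margins, dydx(L(1/3).length)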
Here’s another example:
. webuse labor, clear
. generate wc=(we>12)
. treatreg ww wa cit, treat(wc=wmed wfed cit) nolog
Treatment-effects model -- MLE Number of obs = 250
Wald chi2(3) = 4.25
Log likelihood = -703.17446 Prob > chi2 = 0.2361
------------------------------------------------------------------------------
             | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
ww           |
          wa |  -.0121192    .019932    -0.61   0.543    -.0511852    .0269467
         cit |   .1598895   .3545804     0.45   0.652    -.5350753    .8548544
          wc |   1.146174   .7460483     1.54   0.124    -.3160537    2.608402
       _cons |   2.379273     .90795     2.62   0.009     .5997236    4.158822
-------------+----------------------------------------------------------------
wc           |
        wmed |   .1359134   .0334665     4.06   0.000     .0703203    .2015065
        wfed |   .0866335   .0295276     2.93   0.003     .0287605    .1445066
         cit |   .5669854   .2075585     2.73   0.006     .1601782    .9737927
       _cons |  -3.082537   .3866133    -7.97   0.000    -3.840285   -2.324789
-------------+----------------------------------------------------------------
     /athrho |   .0546227   .1916721     0.28   0.776    -.3210477    .4302931
    /lnsigma |   .9246445   .0449341    20.58   0.000     .8365752    1.012714
-------------+----------------------------------------------------------------
         rho |   .0545685   .1911013                     -.3104539    .4055663
       sigma |   2.520972   .1132777                      2.308448    2.753062
      lambda |   .1375656   .4824063                     -.8079333    1.083065
------------------------------------------------------------------------------
LR test of indep. eqns. (rho = 0): chi2(1) = 0.08 Prob > chi2 = 0.7752
. mfx, predict(xb)
Marginal effects after treatreg
y = Linear prediction (predict, xb)
= 2.27523
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
      wa |  -.0121192     .01993   -0.61   0.543  -.051185  .026947      42.92
    cit* |   .1598895     .35458    0.45   0.652  -.535075  .854854       .624
     wc* |   1.146174     .74605    1.54   0.124  -.316054    2.6084       .276
    wmed |          0          0     .      .            0        0      9.136
    wfed |          0          0     .      .            0        0      8.608
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
. mfx, predict(xbtrt)
Marginal effects after treatreg
y = Linear prediction of wc (predict, xbtrt)
= -.74129165
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
      wa |          0          0     .      .            0        0      42.92
    cit* |   .5669854     .20756    2.73   0.006   .160178  .973793       .624
     wc* |          0          0     .      .            0        0       .276
    wmed |   .1359134     .03347    4.06   0.000    .07032  .201506      9.136
    wfed |   .0866335     .02953    2.93   0.003    .02876  .144507      8.608
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
In the above example, we can see that, when the prediction equation is
xb, a variable that does not appear in the regression equation has a zero
derivative (marginal effect). The same is true when the prediction equation
is xbtrt: a variable that does not appear in the treatment equation has a
zero derivative. So there is little point in having mfx calculate something
that we know will be zero:
. mfx, predict(xb) varlist(wa cit wc)
Marginal effects after treatreg
y = Linear prediction (predict, xb)
= 2.27523
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
      wa |  -.0121192     .01993   -0.61   0.543  -.051185  .026947      42.92
    cit* |   .1598895     .35458    0.45   0.652  -.535075  .854854       .624
     wc* |   1.146174     .74605    1.54   0.124  -.316054    2.6084       .276
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
. mfx, predict(xbtrt) varlist(wmed wfed cit)
Marginal effects after treatreg
y = Linear prediction of wc (predict, xbtrt)
= -.74129165
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
    cit* |   .5669854     .20756    2.73   0.006   .160178  .973793       .624
    wmed |   .1359134     .03347    4.06   0.000    .07032  .201506      9.136
    wfed |   .0866335     .02953    2.93   0.003    .02876  .144507      8.608
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
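As with xb, the prediction xbtrt is linear, so each dy/dx above equals the
corresponding coefficient from the treatment equation. A quick check against
the stored estimates, assuming the treatreg fit is still in memory; this
should reproduce the dy/dx shown for wmed:

. display [wc]_b[wmed]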
The same is true, of course, when predicting the probability:
. mfx, predict(ptrt)
Marginal effects after treatreg
y = Pr(wc) (predict, ptrt)
= .22925831
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
      wa |          0          0     .      .            0        0      42.92
    cit* |   .1619747     .05464    2.96   0.003   .054877  .269072       .624
     wc* |          0          0     .      .            0        0       .276
    wmed |   .0411952     .00998    4.13   0.000   .021644  .060747      9.136
    wfed |   .0262586     .00897    2.93   0.003   .008669  .043848      8.608
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
. mfx, predict(ptrt) varlist(wmed wfed cit)
Marginal effects after treatreg
y = Pr(wc) (predict, ptrt)
= .22925831
-------------------------------------------------------------------------------
variable |    dy/dx    Std. err.     z    P>|z|  [    95% C.I.   ]       X
---------+---------------------------------------------------------------------
    cit* |   .1619747     .05464    2.96   0.003   .054877  .269072       .624
    wmed |   .0411952     .00998    4.13   0.000   .021644  .060747      9.136
    wfed |   .0262586     .00897    2.93   0.003   .008669  .043848      8.608
-------------------------------------------------------------------------------
(*) dy/dx is for discrete change of dummy variable from 0 to 1
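For reference, under Stata 11 and later the same restriction would again be
expressed through margins and dydx(); a minimal sketch, assuming the fitted
treatment-effects model supports margins and the ptrt prediction:

. margins, dydx(wmed wfed cit) predict(ptrt)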