Steve,
> -----Original Message-----
> From: Steven Archambault [mailto:[email protected]]
> Sent: 03 July 2009 00:42
> To: Schaffer, Mark E
> Cc: [email protected]
> Subject: Re: st: RE: Hausman test for clustered random vs.
> fixed effects (again)
>
> Okay that makes sense. For a second there I thought I was not
> understanding the test. The different model specifications I
> use give p values (from the xtoverid test) of .1 to .25. Do
> you think values over say 20% make you less nervous about
> accepting RE results? My plan is to report both FE and RE
> models, suggesting that RE results can be considered valid
> given the p values.
>
> -Steve
Well, like I said, it's really a matter of taste. I'm perhaps more nervous and less gung ho than your average applied economist. 20% makes me less nervous than 10%, of course. But if you want to pursue this seriously, you should consider going down the route of testing specifically the subset of coefficients of interest.
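
In case it helps, here is a rough, untested sketch of one way to do that in Stata, via the Mundlak/correlated-random-effects route (not necessarily the exact procedure in the Arellano paper): add the panel means of the regressors to the RE model, estimate with cluster-robust standard errors, and do a Wald test on the mean terms, all of them for the full contrast or just a few of them for the subset version. Variable names are taken from the output you posted; the ninth regressor, which shows up there as a second "lagp", is left out because its full name is truncated, and the panel is assumed to be -xtset- already.

  local X "lags lagk lagp lagdr laglurb lagtra lagte lagcr"
  foreach v of local X {
      bysort id_code_id: egen double mean_`v' = mean(`v')
  }
  xtreg lnfd `X' mean_*, re vce(cluster id_code_id)
  testparm mean_*                   // joint contrast over all the mean terms
  test mean_lagk mean_laglurb       // contrast for a subset of interest

With all the regressors included, the joint test on the mean terms should be in the same spirit as the -xtoverid- statistic; trimming the -test- line down gives you the subset contrasts.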
--Mark
> On Thu, Jul 2, 2009 at 5:13 PM, Schaffer, Mark
> E<[email protected]> wrote:
> > Steve,
> >
> >> -----Original Message-----
> >> From: Steven Archambault [mailto:[email protected]]
> >> Sent: 03 July 2009 00:01
> >> To: Schaffer, Mark E
> >> Subject: Re: st: RE: Hausman test for clustered random vs.
> >> fixed effects (again)
> >>
> >> Wait a second. I thought that with a chi-squared test we fail to
> >> reject the null that the FE and RE coefficients are the same when the
> >> critical value is such that the p-value is greater than or equal to
> >> .05. A p-value of 5% or more (which goes with a lower chi-squared
> >> value) means there is no evidence against that null.
> >> It was with this logic that I am saying RE is the preferred model.
> >
> > There's nothing sacred about the 5% level. Some people,
> when constructing tables for their papers, put *s next to
> coefficients that are significant at the 10% level ... which
> happens to be your p-value.
> >
> > The bigger the contrasts, the smaller the p-value, and 10%
> implies contrasts that are large enough to make me nervous.
> Of course, de gustibus non est disputandum.
> >
> > If you want to take this further, you might consider
> focusing on the coefficients of interest, whatever they are.
> You may well find that the joint contrast between the RE and
> FE coefficients of interest is significant at a still smaller
> p-value (suggesting you dump RE), or is not at all
> significant (suggesting RE is preferred on efficiency grounds).
> >
> > -xtoverid- doesn't support tests of subsets of coefficients
> (I should consider adding this feature, I guess) but you can
> do the test by hand. It's described in the Arellano paper in
> the help file, and I think Vince Wiggins had a post on
> Statalist some time ago that describes how to do it.
> >
> > Cheers,
> > Mark
> >
> >>
> >> -Steve
> >>
> >>
> >>
> >> On Thu, Jul 2, 2009 at 4:47 PM, Schaffer, Mark
> >> E<[email protected]> wrote:
> >> > Steve,
> >> >
> >> >> -----Original Message-----
> >> >> From: Steven Archambault [mailto:[email protected]]
> >> >> Sent: 02 July 2009 22:41
> >> >> To: [email protected]; Schaffer, Mark E
> >> >> Cc: [email protected]; [email protected]
> >> >> Subject: Re: st: RE: Hausman test for clustered random vs.
> >> >> fixed effects (again)
> >> >>
> >> >> Mark,
> >> >>
> >> >> I should have commented on this earlier, but when I eyeball the
> >> >> coefficients from the FE and RE results, I see that some of them
> >> >> are quite different from one another. However, the xtoverid result
> >> >> suggests RE is the one to use. Does anybody see this as a problem?
> >> >> The numerator of the Hausman Wald test is the difference in
> >> >> coefficients between the two models. Is that difference not being
> >> >> missed in the xtoverid approach?
> >> >
> >> > A few things here:
> >> >
> >> > - The "xtoverid approach" in this case is **identical** to
> >> the traditional Hausman test in concept. They are both
> >> vector-of-contrast tests, the contrast being between the 9
> FE and RE
> >> coefficients. The **only** difference in this case
> between the GMM
> >> stat reported by -xtoverid- and the traditional Hausman
> stat is that
> >> the former is cluster-robust. In addition to the
> references on this
> >> point that I cited in my previous posting, you should also
> check out
> >> Ruud's textbook, "An Introduction to Classical Econometric Theory".
> >> >
> >> > - The test has 9 degrees of freedom because 9 coefficients
> >> are being contrasted jointly. This means that some can indeed be
> >> quite different, but if the others are very similar then a test of
> >> the joint contrasts can be statistically insignificant.
> >> >
> >> > - The p-value reported by -xtoverid- is 10%, which is a little
> >> > worrisome. If you were to do a vector-of-contrasts test focusing on
> >> > a subset of coefficients instead of all 9 (not supported by
> >> > -xtoverid- but do-able by hand), you could well find that you reject
> >> > the null at 5% or 1% or whatever. I don't think it's straightforward
> >> > to conclude that RE is the estimator of choice.
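> >> >
> >> > For reference, in essence both versions are a quadratic form in the
> >> > contrast vector, roughly (b_FE - b_RE)' [est. Var(b_FE - b_RE)]^{-1}
> >> > (b_FE - b_RE), distributed as chi-squared with 9 degrees of freedom
> >> > under the null, so one or two sizeable individual contrasts can be
> >> > diluted by several small ones.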
> >> >
> >> > Hope this helps.
> >> >
> >> > Cheers,
> >> > Mark
> >> >
> >> >>
> >> >> I am posting my regression results to show what I am
> talking about
> >> >> more clearly.
> >> >>
> >> >> Thanks for your input.
> >> >> -Steve
> >> >>
> >> >>
> >> >> Fixed-effects (within) regression               Number of obs      =       404
> >> >> Group variable: id_code_id                      Number of groups   =        88
> >> >>
> >> >> R-sq:  within  = 0.2304                         Obs per group: min =         1
> >> >>        between = 0.4730                                        avg =       4.6
> >> >>        overall = 0.4487                                        max =         7
> >> >>
> >> >>                                                 F(9,87)            =      2.47
> >> >> corr(u_i, Xb)  = -0.9558                        Prob > F           =    0.0148
> >> >>
> >> >>                              (Std. Err. adjusted for 88 clusters in id_code_id)
> >> >> ------------------------------------------------------------------------------
> >> >>              |               Robust
> >> >>         lnfd |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
> >> >> -------------+----------------------------------------------------------------
> >> >>         lags |  -.0267991   .0185982    -1.44   0.153     -.063765    .0101668
> >> >>         lagk |   .0964571   .0353269     2.73   0.008     .0262411     .166673
> >> >>         lagp |   .2210296   .1206562     1.83   0.070    -.0187875    .4608468
> >> >>        lagdr |  -.0000267   .0000251    -1.06   0.291    -.0000767    .0000232
> >> >>      laglurb |   .3483909   .1234674     2.82   0.006      .102986    .5937957
> >> >>       lagtra |   .1109513   .1267749     0.88   0.384    -.1410275    .3629301
> >> >>        lagte |   .0067764    .004166     1.63   0.107    -.0015039    .0150567
> >> >>        lagcr |   .0950221   .0683074     1.39   0.168    -.0407463    .2307905
> >> >>         lagp |   .0343752   .1291378     0.27   0.791    -.2223001    .2910506
> >> >>        _cons |   4.316618   1.996618     2.16   0.033      .348124    8.285112
> >> >> -------------+----------------------------------------------------------------
> >> >>      sigma_u |  .44721909
> >> >>      sigma_e |   .0595116
> >> >>          rho |  .98260039   (fraction of variance due to u_i)
> >> >> ------------------------------------------------------------------------------
> >> >>
> >> >>
> >> >>
> >> >> Random-effects GLS regression                   Number of obs      =       404
> >> >> Group variable: id_code_id                      Number of groups   =        88
> >> >>
> >> >> R-sq:  within  = 0.1792                         Obs per group: min =         1
> >> >>        between = 0.5074                                        avg =       4.6
> >> >>        overall = 0.5017                                        max =         7
> >> >>
> >> >> Random effects u_i ~ Gaussian                   Wald chi2(9)       =     48.97
> >> >> corr(u_i, X)   = 0 (assumed)                    Prob > chi2        =    0.0000
> >> >>
> >> >>                               (Std. Err. adjusted for clustering on id_code_id)
> >> >> ------------------------------------------------------------------------------
> >> >>              |               Robust
> >> >>         lnfd |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
> >> >> -------------+----------------------------------------------------------------
> >> >>         lags |    -.01138   .0135958    -0.84   0.403    -.0380274    .0152673
> >> >>         lagk |   .0115314   .0180641     0.64   0.523    -.0238735    .0469363
> >> >>         lagp |   .2551701    .119322     2.14   0.032     .0213033    .4890369
> >> >>        lagdr |  -6.17e-06   .0000153    -0.40   0.686    -.0000361    .0000238
> >> >>      laglurb |   .0657802   .0153923     4.27   0.000     .0356119    .0959486
> >> >>       lagtra |   .0022183   .0579203     0.04   0.969    -.1113034      .11574
> >> >>        lagte |   .0048012   .0016128     2.98   0.003       .00164    .0079623
> >> >>        lagcr |   .1051833    .045994     2.29   0.022     .0150368    .1953298
> >> >>         lagp |    .184373   .1191063     1.55   0.122    -.0490711    .4178171
> >> >>        _cons |   9.071133   .2322309    39.06   0.000     8.615968    9.526297
> >> >> -------------+----------------------------------------------------------------
> >> >>      sigma_u |  .10617991
> >> >>      sigma_e |   .0595116
> >> >>          rho |  .76095591   (fraction of variance due to u_i)
> >> >> ------------------------------------------------------------------------------
> >> >>
> >> >> . xtoverid;
> >> >>
> >> >> Test of overidentifying restrictions: fixed vs random effects
> >> >> Cross-section time-series model: xtreg re robust
> >> >> Sargan-Hansen statistic  14.684  Chi-sq(9)    P-value = 0.1000
> >> >>
> >> >>
> >> >>
> >> >>
> >> >>
> >> >> On Sat, Jun 27, 2009 at 11:31 AM, Schaffer, Mark
> >> >> E<[email protected]> wrote:
> >> >> > Steve,
> >> >> >
> >> >> >> -----Original Message-----
> >> >> >> From: [email protected]
> >> >> >> [mailto:[email protected]] On Behalf
> >> Of Steven
> >> >> >> Archambault
> >> >> >> Sent: 27 June 2009 00:26
> >> >> >> To: [email protected]; [email protected];
> >> >> >> [email protected]
> >> >> >> Subject: st: Hausman test for clustered random vs. fixed
> >> >> >> effects
> >> >> >> (again)
> >> >> >>
> >> >> >> Hi all,
> >> >> >>
> >> >> >> I know this has been discussed before, but in Stata 10 (and
> >> >> >> versions before 9, I understand) the canned procedure for the
> >> >> >> Hausman test comparing FE and RE models cannot be run when the
> >> >> >> analysis uses clustering (which in Stata 10 implies robust
> >> >> >> standard errors by default). This is the error received:
> >> >> >>
> >> >> >> "hausman cannot be used with vce(robust),
> vce(cluster cvar), or
> >> >> >> p-weighted data"
> >> >> >>
> >> >> >> My question is whether or not the approach of using xtoverid to
> >> >> >> compare FE and RE models (analyzed with clustering and, by default
> >> >> >> in Stata 10, robust standard errors) is accepted in the
> >> >> >> literature. This approach produces the Sargan-Hansen statistic,
> >> >> >> which is typically used in analyses that have instrumented
> >> >> >> variables and need an overidentification test. For the sake of
> >> >> >> publishing, I am wondering whether it is better just not to worry
> >> >> >> about heteroskedasticity and to avoid clustering in the first
> >> >> >> place (even though heteroskedasticity likely exists). Or,
> >> >> >> alternatively, one could just calculate the Hausman test by hand
> >> >> >> following the clustered analyses.
> >> >> >>
> >> >> >> Thanks for your insight.
> >> >> >
> >> >> > It's very much accepted in the literature. In the
> >> -xtoverid- help
> >> >> > file, see especially the paper by Arellano and the book
> >> by Hayashi.
> >> >> >
> >> >> > If you suspect heteroskedasticity or clustered errors,
> >> >> there really is
> >> >> > no good reason to go with a test (classic Hausman) that is
> >> >> invalid in
> >> >> > the presence of these problems. The GMM -xtoverid-
> >> approach is a
> >> >> > generalization of the Hausman test, in the following sense:
> >> >> >
> >> >> > - The Hausman and GMM tests of fixed vs. random effects
> >> >> have the same
> >> >> > degrees of freedom. This means the result cited by Hayashi
> >> >> (and due
> >> >> > to Newey, if I recall) kicks in, namely...
> >> >> >
> >> >> > - Under the assumption of homoskedasticity and independent
> >> >> errors, the
> >> >> > Hausman and GMM test statistics are numerically identical.
> >> >> Same test.
> >> >> >
> >> >> > - When you loosen the iid assumption and allow
> >> >> heteroskedasticity or
> >> >> > dependent data, the robust GMM test is the natural
> >> generalization.
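> >> >> >
> >> >> > In command terms the comparison is roughly the following (a sketch
> >> >> > only, untested; regressor names abbreviated from your model, panel
> >> >> > assumed to be -xtset-, and -xtoverid- installed from SSC):
> >> >> >
> >> >> >   * classic Hausman, valid only under iid errors
> >> >> >   xtreg lnfd lags lagk laglurb, fe
> >> >> >   estimates store fe
> >> >> >   xtreg lnfd lags lagk laglurb, re
> >> >> >   estimates store re
> >> >> >   hausman fe re
> >> >> >
> >> >> >   * GMM version: re-estimate RE with cluster-robust SEs, then test
> >> >> >   xtreg lnfd lags lagk laglurb, re vce(cluster id_code_id)
> >> >> >   xtoverid
> >> >> >
> >> >> > Run the RE model without the cluster option and -xtoverid- should
> >> >> > reproduce the classic statistic; with it, you get the robust
> >> >> > generalization.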
> >> >> >
> >> >> > Hope this helps.
> >> >> >
> >> >> > Cheers,
> >> >> > Mark (author of -xtoverid-)
> >> >> >
--
Heriot-Watt University is a Scottish charity
registered under charity number SC000278.
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/