Steve,
> -----Original Message-----
> From: Steven Archambault [mailto:[email protected]]
> Sent: 13 August 2009 23:48
> To: [email protected]
> Cc: [email protected]; [email protected];
> Schaffer, Mark E
> Subject: Re: st: RE: Sargan-Hansen and instruments--RE vs. FE--Robust
>
> Is there a way to analyze instrumented panel data using
> random effects and robust standard errors? It seems the
> current programs don't allow this.
You can use -xtoverid- to do this.  To get an overid stat after -xtivreg- with random effects, -xtoverid- reestimates everything internally, and if you ask for a robust overid stat, it reestimates internally with robust SEs.
If you add the option -noi- (for "noisily") to -xtoverid- after your estimation, you can see the results of the internal reestimation of the random effects model.
The only problem is that the variable names in the -xtoverid- output will all be Stata internal names like __0000001 and so forth.  You can tell which is which by matching the values of the coefficients in the -xtoverid- output to the values in the output from your original estimation.  A bit of a hassle, but it should work.
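For concreteness, here is a minimal sketch; the variable names (y, x1, x2, instruments z1 and z2, panel variable id, time variable year) are all made up:

  . xtset id year
  . xtivreg y x1 (x2 = z1 z2), re
  . xtoverid, robust noi

-xtivreg, re- won't report robust SEs itself, so the robust overid stat comes from the -robust- option to -xtoverid-, and -noi- shows the internal reestimation described above.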
Hope this helps.
Cheers,
Mark
> On Wed, Aug 12, 2009 at 10:28 AM, Steven Archambault
> <[email protected]> wrote:
> > Mark,
> >
> > Many thanks for your response, this clears up several questions.  Yes,
> > I meant having a chi sq value that accepts the null that there is no
> > difference between RE and FE coefficients, implying the efficient RE
> > model is preferred.
> >
> > -Steve
> >
> >> On Wed, Aug 12, 2009 at 6:44 AM, Schaffer, Mark E
> >> <[email protected]> wrote:
> >>>
> >>> Steve,
> >>>
> >>> I'm not sure exactly what you mean in your question.  For one thing,
> >>> rejection of the null means rejection of RE in favour of FE.  But
> >>> assuming that's just a typo, here's an attempt at a restatement of
> >>> the question and an answer:
> >>>
> >>> 1. The difference between FE and RE can be stated in GMM terms (see
> >>> Hayashi's "Econometrics" for a good exposition).  The FE estimator
> >>> uses only the orthogonality conditions that say the demeaned
> >>> regressor X is orthogonal to the idiosyncratic term e_ij.  The RE
> >>> estimator uses these orthogonality conditions, plus the orthogonality
> >>> conditions that say that the mean of X for the panel unit is
> >>> orthogonal to the panel error term u_j.
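> >>>
> >>> (In that notation, the moment conditions are roughly:
> >>>
> >>>    FE:  E[ (X_ij - Xbar_j) * e_ij ] = 0
> >>>    RE:  the FE conditions plus  E[ Xbar_j * u_j ] = 0
> >>>
> >>> where Xbar_j denotes the panel-unit mean of X.)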
> >>>
> >>> 2. This is why the FE vs RE test is an overid test.  The RE
> >>> estimator uses more orthogonality conditions, and so the equation is
> >>> overidentified.  In the special case of classical iid errors, the
> >>> Hausman test is numerically the same as the Sargan-Hansen test.
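> >>>
> >>> (A minimal sketch of that comparison, with made-up variable names y,
> >>> x1, x2 and panel variable id:
> >>>
> >>>    . xtreg y x1 x2, fe
> >>>    . estimates store fe
> >>>    . xtreg y x1 x2, re
> >>>    . estimates store re
> >>>    . hausman fe re, sigmamore
> >>>    . xtoverid
> >>>
> >>> -xtoverid- here uses the still-active RE estimates.  Under iid errors
> >>> the two statistics should agree; exactly how closely can depend on
> >>> which estimate of the error variance -hausman- uses.)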
> >>>
> >>> 3. Your question is, what happens if some of the Xs are endogenous
> >>> and you have some Zs as instruments?  The answer is that the same
> >>> GMM framework encompasses this.  You remove some of the demeaned Xs
> >>> from the orthogonality conditions and add some demeaned Zs to the
> >>> orthogonality conditions, and if you are using an RE estimator, you
> >>> also remove the panel unit means of those Xs from the orthogonality
> >>> conditions and add some panel unit means of Zs to them.  (This is the
> >>> case for the EC2SLS RE estimator - it's a bit different for the G2SLS
> >>> estimator.  The reason is that G2SLS uses a single quasi-demeaned
> >>> instrument Z instead of the demeaned Z and the panel unit mean of Z
> >>> separately, which is what EC2SLS does.  I think the intuition for
> >>> EC2SLS is easier to get.)
> >>>
> >>> 4. If the FE model is overidentified, you'll now have an overid
> >>> test stat for it that tests the validity of the demeaned Zs as
> >>> instruments.  If you're estimating an RE model, the overid test will
> >>> test the validity of the demeaned Zs and the panel unit means of the
> >>> Zs, and also the panel unit means of the exogenous Xs.
> >>>
> >>> 5. If the overid test with endogenous regressors rejects the RE
> >>> model, you have a standard GMM problem: which of your orthogonality
> >>> conditions is invalid?  It could be the demeaned Zs, or the panel
> >>> unit means of the Xs, or both, or whatever.  In that case, you can do
> >>> a "GMM distance test" (aka "C test", "difference-in-Sargan test",
> >>> etc.), where you compare the Sargan-Hansen test stat (from
> >>> -xtoverid-) after estimation with and without the orthogonality
> >>> conditions that you think are the likely culprits.  But you have to
> >>> decide ex ante which are the dubious ones - econometric theory can't
> >>> tell you.
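> >>>
> >>> (A rough sketch of that kind of comparison, again with made-up names,
> >>> where x2 is the suspect endogenous regressor and z1, z2 are the
> >>> instruments:
> >>>
> >>>    . xtivreg y x1 (x2 = z1 z2), fe
> >>>    . xtoverid, robust
> >>>    . xtivreg y x1 (x2 = z1 z2), re
> >>>    . xtoverid, robust
> >>>
> >>> The RE stat uses the extra panel-mean orthogonality conditions, so
> >>> the difference between the two Sargan-Hansen stats can be read as a
> >>> GMM distance stat, with degrees of freedom equal to the difference in
> >>> the two tests' degrees of freedom.)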
> >>>
> >>> Hope this helps.
> >>>
> >>> Yours,
> >>> Mark
> >>>
> >>> Prof. Mark Schaffer FRSE
> >>> Director, CERT
> >>> Department of Economics
> >>> School of Management & Languages
> >>> Heriot-Watt University, Edinburgh EH14 4AS
> >>> tel +44-131-451-3494 / fax +44-131-451-3296
> >>> http://ideas.repec.org/e/psc51.html
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> ________________________________
> >>>
> >>> From: Steven Archambault [mailto:[email protected]]
> >>> Sent: 12 August 2009 08:50
> >>> To: [email protected]; Schaffer, Mark E
> >>> Cc: [email protected]; [email protected]
> >>> Subject: Sargan-Hansen and instruments--RE vs. FE
> >>>
> >>>
> >>> A while back we discussed the use of the Sargan-Hansen test to
> >>> check if RE was an appropriate analysis to use for panel data.  My
> >>> question now is regarding suspected endogeneity problems.  If the
> >>> Sargan-Hansen statistic is such that you reject the null, in favor of
> >>> using the RE, does it follow that we do not need to worry about
> >>> explanatory variables being endogenous?  My feeling is yes, here is
> >>> the logic.  If I were to use xtivreg I would call the same
> >>> overidentification test to see if my instruments are valid.  So, if
> >>> the test already rejects before adding instruments, I should not need
> >>> the instruments.
> >>>
> >>> If I do use instruments, what is then a valid test to
> >>> determine if RE is an appropriate model to use (over FE)?
> >>>
> >>> Is my question clear?
> >>>
> >>> Thanks,
> >>> Steve
> >>>
> >>>
> >>>
> >>> On Sat, Jun 27, 2009 at 11:31 AM, Schaffer, Mark E
> >>> <[email protected]> wrote:
> >>>
> >>>
> >>> Steve,
> >>>
> >>> > -----Original Message-----
> >>> > From: [email protected]
> >>> > [mailto:[email protected]] On Behalf Of
> >>> > Steven Archambault
> >>> > Sent: 27 June 2009 00:26
> >>> > To: [email protected]; [email protected];
> >>> > [email protected]
> >>> > Subject: st: Hausman test for clustered random vs. fixed
> >>> > effects (again)
> >>> >
> >>> > Hi all,
> >>> >
> >>> > I know this has been discussed before, but in Stata 10 (and
> >>> > versions before 9 I understand) the canned procedure for the
> >>> > Hausman test when comparing FE and RE models cannot be run when
> >>> > the data analysis uses clustering (and by default corrects for
> >>> > robust errors in Stata 10).
> >>> > This is the error received:
> >>> >
> >>> > "hausman cannot be used with vce(robust), vce(cluster cvar),
> >>> > or p-weighted data"
> >>> >
> >>> > My question is whether or not the approach of using xtoverid to
> >>> > compare FE and RE models (analyzed using the clustered and by
> >>> > default robust approach in Stata 10) is accepted in the
> >>> > literature.  This approach produces the Sargan-Hansen stat, which
> >>> > is typically used with analyses that have instrumented variables
> >>> > and need an overidentification test.  For the sake of publishing
> >>> > I am wondering if it is better just not to worry about
> >>> > heteroskedasticity, and avoid clustering in the first place (even
> >>> > though heteroskedasticity likely exists)?  Or, alternatively one
> >>> > could just calculate the Hausman test by hand following the
> >>> > clustered analyses.
> >>> >
> >>> > Thanks for your insight.
> >>>
> >>> It's very much accepted in the literature.  In the -xtoverid- help
> >>> file, see especially the paper by Arellano and the book by Hayashi.
> >>>
> >>> If you suspect heteroskedasticity or clustered errors, there really
> >>> is no good reason to go with a test (classic Hausman) that is
> >>> invalid in the presence of these problems.  The GMM -xtoverid-
> >>> approach is a generalization of the Hausman test, in the following
> >>> sense:
> >>>
> >>> - The Hausman and GMM tests of fixed vs. random effects have the
> >>> same degrees of freedom.  This means the result cited by Hayashi
> >>> (and due to Newey, if I recall) kicks in, namely...
> >>>
> >>> - Under the assumption of homoskedasticity and independent errors,
> >>> the Hausman and GMM test statistics are numerically identical.  Same
> >>> test.
> >>>
> >>> - When you loosen the iid assumption and allow heteroskedasticity or
> >>> dependent data, the robust GMM test is the natural generalization.
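> >>>
> >>> (A minimal sketch of the robust/clustered version, with hypothetical
> >>> variable names y, x1, x2 and panel variable id:
> >>>
> >>>    . xtreg y x1 x2, re cluster(id)
> >>>    . xtoverid
> >>>
> >>> Here -xtoverid- should report a cluster-robust Sargan-Hansen stat
> >>> for fixed vs. random effects, in place of the classic Hausman test.)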
> >>>
> >>> Hope this helps.
> >>>
> >>> Cheers,
> >>> Mark (author of -xtoverid-)
> >>>