Dear Mark and Julia,
HT does not generate consistent estimators in the presence of
autocorrelation and/or heteroskedasticity. Section 2.3 of the paper gives
the consistency analysis. As you can see, the consistent std errors are
based on the homoskedastic case.
In other words, you have to work with the fixed-effects estimator and the
IV between-effects estimator, steps 1 and 2. The goal is to build a HAC
variance for these estimators. Note that the IVs were generated using FE,
so the variance has to control for that.
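A minimal sketch of what steps 1 and 2 might look like in Stata (the names
y, x1, x2, z1 and the panel identifier id are hypothetical: x1 is an
exogenous time-varying regressor, x2 an endogenous time-varying one, and z1
the endogenous time-invariant one). This only sketches the point estimates;
the HAC variance that accounts for the FE-generated regressor still has to
be built on top of it:
* Step 1: fixed-effects (within) estimates with cluster-robust std errors
areg y x1 x2, absorb(id) cluster(id)
predict double xbhat, xb
gen double de = y - xbhat        // individual effect plus residual
* Step 2: between-effects IV regression of the group means of de on the
* time-invariant variable, instrumenting z1 with the exogenous x1
* (-ivreg2- is on SSC; official -ivreg- would also do)
collapse (mean) de x1 z1, by(id)
ivreg2 de (z1 = x1), robust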
Best, Rodrigo.
----- Original Message -----
From: "Schaffer, Mark E" <[email protected]>
To: <[email protected]>
Sent: Monday, May 01, 2006 5:47 PM
Subject: RE: st: RE: Hausman taylor
Julia,
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Julia Spies
> Sent: 29 April 2006 10:20
> To: [email protected]
> Subject: RE: st: RE: Hausman taylor
>
> Sorry, what I meant was that the overid test statistic is not
> significant and that running a Hausman test to compare HT with GLS
> is significant. I just mixed it up. Apologies!
>
> Julia
>
> > --- Original Message ---
> > From: "Schaffer, Mark E" <[email protected]>
> > To: <[email protected]>
> > Subject: RE: st: RE: Hausman taylor
> > Date: Sat, 29 Apr 2006 07:39:07 +0100
> >
> > Julia,
> >
> > > -----Original Message-----
> > > From: [email protected]
> > > [mailto:[email protected]] On Behalf Of Julia
> > > Spies
> > > Sent: 28 April 2006 23:51
> > > To: [email protected]
> > > Subject: Re: st: RE: Hausman taylor
> > >
> > > Dear Mark,
> > >
> > > with "improving the model" I mean that the over-identification test
> > > statistic comparing the FE model (I use areg with the cluster()
> > > option, since I identified autocorr. and heteroskedasticity) with
> > > the HT estimation is significant, which means - if I understand it
> > > correctly - that the correlation between the explanatory variables
> > > and the individual effects has been removed by the instrumentation.
> >
> > Apologies if I am misunderstanding what you have reported, but it's
> > the other way around. A large and significant overid stat is evidence
> > AGAINST your HT estimate. As usual with IV estimation, under the null
> > that the orthogonality conditions are satisfied (the instruments are
> > "valid"), the overid stat is distributed as chi-sq. A big stat and
> > rejection of the null
> > suggests that your orthogonality conditions are not satisfied, i.e.,
> > the instruments are not valid, i.e., your HT estimation is misspecified.
> >
> > --Mark
> >
> > > Of course, since I have the odd parameter estimates in the
> > > instrumented time-invariant variables (which cannot be estimated in
> > > the FE model), they don't enter the over-identification test.
I'm not sure this is quite right. Hausman and Taylor (1981, p. 1389) say that
"_all_ of the exogeneity information about X and Z is subject to test by
this procedure" [emphasis in the original], meaning the overid test they
give in their equation (2.2). Even though they aren't used to calculate the
test statistic, all the orthogonality conditions are part of the null, or so
they say.
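For what it's worth, the usual way to get a test of this kind in Stata is a
Hausman-type comparison of the FE and HT estimates. Something along these
lines might do it (the variable names and the endog() split are
hypothetical, and the panel identifier is assumed to be called id):
* declare the panel identifier
iis id
* fixed-effects estimates (consistent under and outside the null)
xtreg y x1 x2, fe
estimates store fe
* Hausman-Taylor estimates (efficient under the null)
xthtaylor y x1 x2 z1 z2, endog(x2 z2)
estimates store ht
* compare the two sets of estimates
hausman fe ht, sigmamore
The comparison only uses the coefficients the two estimators share, i.e.
the time-varying ones, which is the point made above; even so, HT argue
that all the orthogonality conditions are part of the null.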
> > > My question therefore was whether autocorr. and heteroskedasticity
> > > could produce these very high estimates or whether someone could
> > > think of any other source for the problem, and how I can correct for
> > > it in the HT estimation.
I am not sure, but the HT estimation may generate consistent parameter
estimates even in the presence of autocorrelation and heteroskedasticity,
and the problem may be that the var-cov estimate is wrong. This needs
checking, but if so, then you could address the problem by using
cluster-robust standard errors. This would give you SEs that are robust to
arbitrary autocorrelation and heteroskedasticity.
Unfortunately, -xthtaylor- doesn't support the -cluster- option. This might
be deliberate (i.e., the Stata programmers know that HT won't generate
consistent parameter estimates in the presence of AC or het), or it might
not. If not, then you could consider making a copy of -xthtaylor- (call
it, say -xthtaylor2-) and editing it so that it forces cluster-robust
standard errors. The way to do this is to go to the block that says
/* Hausman-Taylor estimator */
A few lines under that is a call to regress, using the old-fashioned syntax
(IVs in parentheses) for an IV estimation. You would add a cluster option
to that line. It's currently
reg `yvar_g' `list_g' `g_cons' /*
*/ (`xvar1_dm' `xvar2_dm' /*
*/ `xvar1_m' `zvar1' `g_cons') `wtopt' /*
*/ if `touse', nocons
and you would change this to
reg `yvar_g' `list_g' `g_cons' /*
*/ (`xvar1_dm' `xvar2_dm' /*
*/ `xvar1_m' `zvar1' `g_cons') `wtopt' /*
*/ if `touse', nocons cluster(`ivar')
where the cluster(`ivar') at the end is the added bit.
You would also need to change the line at the top of the file from
program xthtaylor, eclass byable(recall) sort
to
program xthtaylor2, eclass byable(recall) sort
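Then save the edited file as xthtaylor2.ado somewhere on your adopath and
call it the same way as the original, e.g. (variable names hypothetical,
panel variable already declared):
. xthtaylor2 y x1 x2 z1 z2, endog(x2 z2)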
Might work. Worth a thought, anyway.
HTH.
Cheers,
Mark
> > > Sorry for not making my point clear in the first e-mail. I will
> > > definitely try out Rodrigo's suggestions. Thank you very much for
> > > the advice!
> > >
> > > Best regards,
> > > Julia
> > >
> > >
> > > > --- Original Message ---
> > > > From: "Schaffer, Mark E" <[email protected]>
> > > > To: <[email protected]>
> > > > Subject: st: RE: Hausman taylor
> > > > Date: Fri, 28 Apr 2006 22:51:29 +0100
> > > >
> > > > Julia,
> > > >
> > > > > -----Original Message-----
> > > > > From: [email protected]
> > > > > [mailto:[email protected]] On
> Behalf Of Julia
> > > > > Spies
> > > > > Sent: 28 April 2006 12:48
> > > > > To: [email protected]
> > > > > Subject: st: Hausman taylor
> > > > >
> > > > > Dear all,
> > > > >
> > > > > I'm quite a beginner with Stata and I'm trying to run a Hausman-
> > > > > Taylor regression. However, taking some (plausible) time-invariant
> > > > > variables as endogenous results in outrageous parameter estimates
> > > > > for these variables.
> > > > > Nevertheless, the over-identification test suggests that
> > > > > instrumenting these variables has improved the model.
> > > >
> > > > This sounds odd ... what do you mean by "improving the model"?
> > > >
> > > > --Mark
> > > >
> > > > > Does
> > > > > anyone have an idea what the problem could be? I
> > > > > understand there is no option to correct for heteroskedasticity
> > > > > and
> > > > > autocorrelation.
> > > > > Does anyone know how to do it manually?
> > > > >
> > > > > Cheers,
> > > > > Julia
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/