Re: st: RE: Why not always specify robust standard errors?
From: "Rodrigo A. Alfaro" <[email protected]>
To: <[email protected]>
Subject: Re: st: RE: Why not always specify robust standard errors?
Date: Tue, 13 Feb 2007 19:11:41 -0500
Hi all,
I downloaded the paper from http://www.stat.berkeley.edu/~census/mlesan.pdf,
guessing that it is a reliable version. It seems to me that for nonlinear
models Maarten is right: in plain English, models like logit or probit are
hard to justify with robust standard errors when the researcher is not sure
of the underlying model. But for linear models, in particular the OLS case
proposed at the beginning of this discussion, I think there is not much of a
problem. (Just for fun: it is interesting to note that each of these
\hat e(i)^2 is not consistent, but their weighted average by the x's is.) In
OLS, whatever the underlying model, the standard asymptotic approach gives
us t-statistics that are asymptotically normal. As pointed out in the paper,
the finite-sample t is probably not close to a Student's t. In conclusion,
it seems reasonable to me to apply robust to OLS, keeping in mind that you
could lose efficiency and other available measures (like AIC, BIC, R2, etc.)
in the case where the model does not have heteroskedasticity. I would hope
that the researcher does not get a strongly different answer; if so, I would
take a second look at the specification of the model. In the end,
heteroskedasticity is a signal of our ignorance of the true process.
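To make this concrete, here is a minimal sketch with simulated data (all
names are made up; -rnormal()- needs a recent Stata, and
-invnormal(uniform())- is the older idiom):

clear
set seed 12345
set obs 500
generate x = rnormal()
generate e = (1 + abs(x)) * rnormal()  // error variance grows with |x|
generate y = 1 + 2*x + e
regress y x          // classical (iid) standard errors; AIC/BIC via -estat ic-
regress y x, robust  // Huber/White robust standard errors

The point estimates are identical across the two runs; only the standard
errors change.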
But my original motivation for writing this email is the following. Suppose
that the model under study cannot be estimated by OLS, due to some
endogeneity of the regressors. Would you still apply robust anyway? Some of
this discussion has been analyzed by Baum, Schaffer and Stillman in their
paper about -ivreg2-. They analyze the problem of choosing between IV and
two-step GMM. I took up the analysis starting from the fact that the
researcher will "click" (or type) the robust option anyway. It is clear that
robustness does not help in frameworks where the estimator itself is
seriously biased. For example, when the degree of overidentification and the
degree of endogeneity are both large, none of these estimators is reliable;
the standard errors are very imprecise as well, making the inference
completely wrong. Loosely speaking, it seems to me that other problems in
the model, such as the quality and number of instruments, are more relevant
than the heteroskedasticity itself.
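For reference, the alternatives look something like this in -ivreg2- (the
names y, x1, x2, z1-z3 are hypothetical; -ivreg2- reports the Sargan/Hansen
overidentification test with each fit):

ivreg2 y x1 (x2 = z1 z2 z3)                // IV, classical standard errors
ivreg2 y x1 (x2 = z1 z2 z3), robust        // IV, heteroskedasticity-robust
ivreg2 y x1 (x2 = z1 z2 z3), gmm2s robust  // two-step efficient GMM

(In older releases of -ivreg2- the two-step GMM option is -gmm- rather than
-gmm2s-.)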
I hope to get a lot of replies :-)
Rodrigo.
----- Original Message -----
From: "Richard Williams" <[email protected]>
To: <[email protected]>
Sent: Tuesday, February 13, 2007 5:59 PM
Subject: RE: st: RE: Why not always specify robust standard errors?
At 12:26 PM 2/13/2007, Maarten Buis wrote:
If you think your model is correct then it makes no sense to use robust
standard errors. Note that the model assumes no heteroskedasticity in
the population, so the fact that we always find some heteroskedasticity
in our samples is no argument. You could test for it of course, but since
we are now in ``purist land'' we would have serious trouble performing
tests based on the model that was subsequently selected, since now our
conclusions are based on a sequence of tests...
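(In Stata such a pretest would be, e.g.,

regress y x1 x2      // hypothetical model
estat hettest        // Breusch-Pagan/Cook-Weisberg test
estat imtest, white  // White's general test

with the caveat just mentioned that inference after pretesting is no longer
clean.)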
Thanks Maarten. I'm no doubt betraying my statistical ignorance here, but
is that the correct definition of "correct"? I.e., does "correct" mean no
heteroskedasticity? Or is no hetero just a requirement for OLS to be the
optimal method for estimating the model? It seems to me that a model
could be correct in that Y is a linear function of the Xs and all relevant
Xs are included. Homoskedastic errors are then an additional requirement
for the OLS estimates to be BLUE. But, if errors are heteroskedastic, we
can use another method, like WLS. Or, we can content ourselves with using
robust standard errors, which do not require that the errors be iid.
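Concretely, the two fixes might look like this (wvar is a hypothetical
variable that the error variance is assumed proportional to):

regress y x1 x2, robust             // keep OLS estimates, adjust the SEs
regress y x1 x2 [aweight = 1/wvar]  // WLS, if the variance form is known

The first leaves the point estimates alone; the second reweights them.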
In any event, in practice probably every model will be at least a little
mis-specified and/or have error terms that aren't perfectly iid. So, why
not always use robust? One potential problem, I think, is that robust
standard errors tend to be larger. Perhaps unnecessarily relaxing the iid
assumption has similar effects to including extraneous variables -
estimates will remain unbiased but adding unnecessary junk to the model
can cause standard errors to go up.
You know, this is one of the problems with using Stata. I never used to
have these kinds of problems with SPSS, because SPSS doesn't let you
estimate robust standard errors!
-------------------------------------------
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
FAX: (574)288-4373
HOME: (574)289-5227
EMAIL: [email protected]
WWW (personal): http://www.nd.edu/~rwilliam
WWW (department): http://www.nd.edu/~soc
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*