From:    Austin Nichols <austinnichols@gmail.com>
To:      "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject: Re: st: Clustered Standard Errors vs HLM for Small Sample Project
Date:    Mon, 18 Nov 2013 15:19:26 -0500
For good inference, you want not only many clusters, but also clusters that are balanced (which means guidelines about 20 or 30 or 42 or 50 clusters are less than helpful):
http://www.stata.com/meeting/13uk/nichols_crse.pdf

When RE/HLM models and cluster-robust SEs work well, they give similar answers, but in some circumstances where they work poorly, they can also give similar (wrong) answers:
https://appam.confex.com/appam/2013/webprogram/Paper6337.html

You need to describe the source of correlations in the errors and regressors in more detail to get a good answer on how to design a simulation that indicates which approach is likely to give the best inference in your setting.

In his reply below, John Antonakis seems to be mixing up a comparison between FE and RE (ssc describe xtoverid) with a comparison between FE and pooled OLS with CRSE; whether or not you should use a fixed-effects method is a more complicated question than any one test will answer, and depends very strongly on what you believe about measurement error in your predictors.
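A rough sketch of such a simulation is below; the data-generating process, parameter values, and variable names are purely illustrative assumptions, not a recommendation. It uses -xtmixed-, the Stata 10 name for the multilevel command (-mixed- in Stata 13 and later), and compares rejection rates for a true null effect under pooled OLS with cluster-robust SEs and a random-intercept model with 10 unbalanced clusters.

clear all
set seed 12345

capture program drop onesim
program define onesim, rclass
    drop _all
    set obs 10                              // 10 clusters ("states")
    gen state = _n
    gen csize = 3 + floor(runiform()*100)   // unbalanced cluster sizes (3 to 102)
    gen xc    = rnormal()                   // cluster-level component of the regressor
    gen u     = rnormal()                   // cluster-level error (random intercept)
    expand csize
    gen x = xc + rnormal()                  // regressor correlated within cluster
    gen y = u + rnormal()                   // true effect of x on y is zero
    * pooled OLS with cluster-robust SEs
    regress y x, vce(cluster state)
    return scalar rej_ols = abs(_b[x]/_se[x]) > invttail(e(df_r), .025)
    * random-intercept (multilevel) model
    xtmixed y x || state:
    return scalar rej_re = abs(_b[x]/_se[x]) > invnormal(.975)
end

simulate rej_ols=r(rej_ols) rej_re=r(rej_re), reps(500) nodots: onesim
summarize rej_ols rej_re    // rejection rates near .05 indicate valid 5% tests

Vary the cluster sizes and the cluster-level components of x and the error to mimic your own data before trusting either set of rejection rates.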
On Mon, Nov 18, 2013 at 10:26 AM, John Antonakis <John.Antonakis@unil.ch> wrote:
> Hi:
>
> You should not use terms like "HLM" (which is a program, in addition to being an estimation method in some disciplines) without defining it; most here use Stata, not that program.
>
> I guess I know what you are after: whether you should estimate a random-effects (multilevel) model versus a pooled model using OLS with a cluster-robust estimate of the variance. Before you do anything, if you have level-1 predictors (i.e., predictors that vary within clusters), then you should be much more worried about omitted fixed effects than just about robust standard errors--which are important too. See:
>
> Halaby, C. N. 2004. Panel models in sociological research: Theory into practice. Annual Review of Sociology, 30: 507-544.
>
> So, I would first check for omitted fixed effects. If the Hausman endogeneity test (which can be run with the user-written command -xtoverid- from SSC) is significant, it means that the restriction that your regressors do not correlate with u_j (i.e., the cluster-level error term) is rejected. Then you must model the fixed effects, either with dummies or using the Mundlak procedure:
>
> Antonakis, J., Bendahan, S., Jacquart, P., & Lalive, R. 2010. On making causal claims: A review and recommendations. The Leadership Quarterly, 21(6): 1086-1120.
>
> Next, as for the number of clusters: ideally you will have between 30 and 50 for valid inference.
>
> Hth.
> J.
>
> __________________________________________
>
> John Antonakis
> Professor of Organizational Behavior
> Director, Ph.D. Program in Management
>
> Faculty of Business and Economics
> University of Lausanne
> Internef #618
> CH-1015 Lausanne-Dorigny
> Switzerland
> Tel ++41 (0)21 692-3438
> Fax ++41 (0)21 692-3305
> http://www.hec.unil.ch/people/jantonakis
>
> Associate Editor:
> The Leadership Quarterly
> Organizational Research Methods
> __________________________________________
>
>
> On 18.11.2013 03:06, mkobren1@comcast.net wrote:
>>
>> I'm using Stata 10 and I'm trying to figure out whether to use clustered standard errors or HLM. I have 233 observations from agencies located in 10 different states.
>>
>> The minimum number of observations I have from a state is 3 and the maximum is 108, with an average of 23.3. I'm not interested in state-level differences; I'm only interested in results at the agency level, and I want to account for the fact that there may be some state-level effects.
>>
>> The literature I've read so far doesn't seem to point me in any definite direction. It seems to say that HLM works best on larger datasets, but it also seems to say that you need at least 20 clusters for either method to be effective. Does anyone have a suggestion for which of these two methods I should use, or at least what I should consider in making my choice? Is there some other method I should use?
>>
>> Thank you in advance for your consideration.
>>
>> MK
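For concreteness, the specific commands discussed in this thread might look like the following sketch. The outcome y, predictor x, and cluster variable state are placeholder names, and -xtmixed- is again the Stata 10 name for -mixed-:

ssc install xtoverid                  // user-written test mentioned above

xtset state                           // declare the cluster (panel) structure
xtreg y x, re                         // random-effects model
xtoverid                              // Hausman-style test of the RE orthogonality assumption

* Mundlak device: add the cluster mean of the level-1 predictor
bysort state: egen xbar = mean(x)
xtreg y x xbar, re vce(cluster state)

* The two estimators the original poster is choosing between
regress y x, vce(cluster state)       // pooled OLS with cluster-robust SEs
xtmixed y x || state:                 // random-intercept (multilevel/"HLM") model

Whether either of the last two gives reliable standard errors with 10 very unbalanced states is exactly what a simulation like the one sketched above is meant to check.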