Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
Re: st: How to estimate Fixed and random effects for a long panel dataset.
From
Herman Haugland <[email protected]>
To
[email protected]
Subject
Re: st: How to estimate Fixed and random effects for a long panel dataset.
Date
Sat, 3 Aug 2013 21:23:29 +0200
Thank you for answering, James!
I am trying to estimate beta(equity) using data from 5 banks. Just in
case it helps, my model looks something like this:
beta = L.leverage L.ROA L.RWAA i.year D(ummy) L.LeverageD L.ROAD L.RWAAD
* Given my data, 4 of the banks have very similar characteristics, so
the dummy asks: is this bank A (0) or bank B (1)?
* RWAA = Risk-weighted assets / Average total assets
I have 5 banks, 61 quarters, and 16-1=15 year-dummies.
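For concreteness, what I have been running looks roughly like the
following (variable names are placeholders for my actual series, so
treat this as a sketch rather than my exact do-file):

```stata
* Sketch only -- beta, leverage, roa, rwaa, and D are placeholder names.
* D is the bank-B dummy; under fixed effects its level is absorbed by
* the bank effects, but its interactions with the lagged regressors remain.
xtset id quarter
xtreg beta L.leverage L.roa L.rwaa i.year ///
      i.D#c.L.leverage i.D#c.L.roa i.D#c.L.rwaa, fe vce(cluster id)
```

It is this vce(cluster id) part, with only 5 clusters, that produces
the missing Wald statistic I describe below.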
The original study, on which I am basing my methodology, has a model like this:
beta = L.leverage // The authors controlled for additional
variables but found them statistically insignificant.
* 6 banks, 38 half-years, 19-1=18 year-dummies.
As for the clustering problem, it is entirely due to the fact that I
have a long panel. In this case, vce(cluster id) simply does not work.
As for using a VAR, I have no expertise there whatsoever. Would you
say that it is the only, or the best, option given my data?
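In case it is useful, here is roughly how I understand the two options
you raised, in Stata terms (again, variable names are placeholders and
this is only a sketch of what I would try, not tested code):

```stata
* Sketch only -- variable names are placeholders.
xtset id quarter

* Option 1: bootstrapped standard errors for the FE model; the panel
* bootstrap resamples whole banks, so with N = 5 this may still be fragile.
xtreg beta L.leverage L.roa L.rwaa i.year, fe vce(bootstrap, reps(400))

* Option 2: abandon the panel model and fit a per-bank VAR, using the
* long time dimension (61 quarters) one bank at a time.
preserve
keep if id == 1
tsset quarter
var beta leverage roa rwaa, lags(1/4)
restore
```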
Thanks.
Med vennlig hilsen / Best regards,
Hernan Aros
________________
Contact Information:
Tel: +47 930 289 69
E-mail: [email protected]
LinkedIn: /in/hermanhaugland
On Sat, Aug 3, 2013 at 8:43 PM, James Bailey <[email protected]> wrote:
> Herman,
>
> First, I wonder what controls you are using- this could be the true cause
> of the degrees of freedom issue.
>
> If it really is a clustering issue, then Stata may be trying to stop you
> for your own good. Mostly Harmless Econometrics section 8.2.3 discusses how
> clustered standard errors are biased if there are fewer than 42 clusters,
> and you are way under 42.
> They discuss possible solutions, one of which is bootstrapped errors.
>
> Finally, with such a large T and small N, it may make sense to abandon
> panel models entirely and use a time series technique like Vector
> Autoregression instead.
>
> Best,
> James Bailey
> Temple University Department of Economics
>
> On Sat, Aug 3, 2013 at 5:27 AM, Herman Haugland
> <[email protected]> wrote:
>> Dear all,
>>
>> I think I have sent this e-mail before, but I don't know if it made it
>> through Majordomo.
>>
>>
>> I have a long panel dataset, meaning my N is much smaller than my T.
>> I have N = 5, T = 61. I am trying to perform OLS, Fixed-effects and
>> Random-effects analysis, using vce(cluster id).
>>
>> I tried to estimate my model using xtreg for FE and RE, but I get an
>> error related to the fact that I do not have enough degrees of freedom
>> for performing the estimation.
>>
>> This is what I get:
>>
>> Wald chi2(4) = .
>> Prob > chi2 = .
>>
>>
>> Stata sends me here for help: help j_robustsingular // My case is
>> explained under the title "Are you using a svy estimator or did you
>> specify the vce(cluster clustvar) option?"
>>
>> So, after reading that, I have assumed that I cannot trust the output
>> of that estimation, because the errors might be biased.
>>
>> First question: Am I right on thinking that?
>>
>> In addition, in the book "Microeconometrics Using Stata", the authors
>> clearly indicate that the xtreg command, with the vce(cluster id)
>> option for robust standard errors, is mostly appropriate for short
>> panels, which is not my case.
>>
>> An alternative is to use the xtregar command for estimating random
>> and fixed effects, which assumes an AR(1) process for the errors.
>> However, when I tested with xtserial, the errors do not show serial
>> correlation. On the other hand, the xtregar command has the option
>> rhof(#), where # specifies the value of rho, the AR(1) coefficient.
>>
>> My main questions are:
>>
>> 1) What is the right way to estimate fixed and random effects for a
>> long panel dataset, in which the number of regressors is larger than
>> N?
>>
>> 2) Would specifying rho = 0 completely eliminate the AR(1) process for
>> the errors, and leave me with an estimation that fits my data?
>>
>>
>> Thank you for your answers.
>>
>> Best regards,
>> Herman Haugland
>> *
>> * For searches and help try:
>> * http://www.stata.com/help.cgi?search
>> * http://www.stata.com/support/faqs/resources/statalist-faq/
>> * http://www.ats.ucla.edu/stat/stata/