Re: st: RE: Panel data: large number of linear time trends
From: Austin Nichols <[email protected]>
To: [email protected]
Subject: Re: st: RE: Panel data: large number of linear time trends
Date: Fri, 24 Feb 2012 14:04:31 -0500
William Gui Woolston--
You have to detrend the outcome and each regressor, not just the outcome.
clear all
* wrapper so that -by:- can run each regression and save its residuals
program mydetrend, rclass byable(recall)
    version 10.1
    syntax varlist [if] [in], DETrend(varname)
    tempvar eps
    marksample touse
    regress `varlist' if `touse'
    predict double `eps' if e(sample), res
    replace `detrend' = `eps' if e(sample)
end
webuse grunfeld
g i_dtr = .
g mv_dtr = .
* detrend the regressor and the outcome, one company at a time
by company: mydetrend invest year, det(i_dtr)
by company: mydetrend mvalue year, det(mv_dtr)
* wrong: only the outcome has been detrended
areg mv_dtr invest, abs(company)
* right: both sides detrended; the slope matches the brute-force fit below
areg mv_dtr i_dtr, abs(company)
* brute force: explicit company-specific intercepts and trends
reg mvalue c.invest c.year##i.company
You can speed this up by skipping the per-panel regressions and instead
computing the regression coefficients and residuals by hand, as mentioned in
http://www.stata.com/statalist/archive/2011-09/msg01177.html
though for 3,100 panels the solution given above is fast enough.
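A minimal sketch of that by-hand route for one variable (the variable
names are assumed, and it presumes the code above has already been run
so that i_dtr exists for comparison):

bysort company: egen double ibar = mean(invest)
by company: egen double tbar = mean(year)
gen double dt = year - tbar
* closed-form within-panel OLS slope of invest on year is sty/stt
by company: egen double sty = total(dt*(invest - ibar))
by company: egen double stt = total(dt*dt)
gen double i_dtr2 = (invest - ibar) - (sty/stt)*dt
* should agree with the looped -regress- version up to float precision
assert reldif(i_dtr2, i_dtr) < 1e-6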
One thing to think about--there is estimation error in the
panel-specific trends.
When you detrend each of the RHS variables, you are potentially
introducing a lot of measurement error that can bias their coefficients:
xhat = x - bhat*time, where bhat is itself measured with error, so the
usual errors-in-variables logic applies.
I would think you would want some simulation evidence showing that not
estimating each of the county-level trends is worse than estimating
them, and even then you would want to keep in mind that the coefficients
may be attenuated toward zero.
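If you wanted to build that simulation, one possible skeleton is below;
the data-generating process and all the names are mine, so treat it as a
sketch to adapt, not as evidence in itself. Here the true trends are
correlated with x, so the FE-only fit should be off while the detrended
fit should be near the truth; rerun it with the trend term removed from
y to look at the opposite case.

capture program drop simtrend
program simtrend, rclass
    clear
    set obs 100                            // panels; scale up toward 3,100
    gen long id = _n
    gen double tr = rnormal()              // true panel-specific trend
    expand 12                              // T = 12 years, as in the question
    bysort id: gen int year = _n
    gen double x = rnormal() + 0.5*tr*year // x correlated with the trend
    gen double y = x + tr*year + rnormal() // true coefficient on x is 1
    * detrend both sides by hand; year = 1..12 has mean 6.5 and
    * sum of squared deviations 143
    foreach v in x y {
        by id: egen double `v'bar = mean(`v')
        by id: egen double s`v' = total((year - 6.5)*(`v' - `v'bar))
        gen double `v'd = (`v' - `v'bar) - (s`v'/143)*(year - 6.5)
    }
    reg yd xd
    return scalar b_dtr = _b[xd]
    areg y x, absorb(id)
    return scalar b_fe = _b[x]
end
simulate bd=r(b_dtr) bf=r(b_fe), reps(200) nodots: simtrend
su bd bf   // compare both means to the true value of 1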
On Fri, Feb 24, 2012 at 1:20 PM, Christopher Baum <[email protected]> wrote:
> <>
> I am estimating a panel data model, where the unit of observation is a
> county-year. There are roughly 3,100 counties in the United States,
> and I have data for 12 years.
>
> I wish to include linear county-time trends. That is, I want a
> separate time trend for each county.
>
> Estimating this model by "brute force" (by interacting time with a
> dummy for each county) would mean adding an additional 3,100 variables
> to my model. Is there a more efficient way to estimate this model?
>
>
> The Frisch-Waugh-Lovell theorem (as discussed in the Baum-Schaffer-Stillman papers describing -ivreg2- on SSC) tells you that if you want to 'partial off' the effects of a set of variables, be they time trends, seasonals, etc., in computing a regression, you may either insert the appropriate regressors, or
> perform the transformation separately, take the residuals, and use them as the new response variable. So in the case of time trends to be estimated separately for each panel unit, you may detrend the response variable separately for each county, save the residuals (optionally adding back the county-level mean) and put that detrended dep.var. into your regression.
>
> One difficulty here is that statsby: (or just plain by:) can run the regressions, as Nick suggests, and save the coefficients, but it cannot also compute the predicted values or residuals. The simplest solution to that, as described in ISP, is to write a simple 'wrapper' program that will do both the regression and the prediction. E.g., based loosely on -myregress- (ssc type myregress.ado):
>
> clear all
> program mydetrend, rclass byable(recall)
> version 10.1
> syntax varlist [if] [in], DETrend(varname)
> tempvar eps
> marksample touse
> regress `varlist' if `touse'
> predict double `eps' if e(sample), res
> replace `detrend' = `eps' if e(sample)
> end
>
> webuse grunfeld
> g invest_dtr = .
> by company: mydetrend invest year, det(invest_dtr)
>
> The variable invest_dtr contains detrended invest. If you want it to have the same units as invest, compute the subsample means and add them to each subsample.
>
> Kit
>
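The add-back-the-means step Kit describes at the end is short; as a
sketch (invest_bar is an assumed name):

bysort company: egen double invest_bar = mean(invest)
replace invest_dtr = invest_dtr + invest_bar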