Re: st: Very high t-statistics and very small standard errors
From
Nick Cox <[email protected]>
To
[email protected]
Subject
Re: st: Very high t-statistics and very small standard errors
Date
Wed, 2 May 2012 15:38:13 +0100
I see. At best, that puts enormous trust in the order in which the
data arrive. At worst, it is meaningless.
To make it concrete, suppose you did this for the auto data bundled
with Stata. You might pick up something because the data come sorted
by the -make- variable, and it seems quite possible that cars from the
same manufacturer are not completely independent. But if that were the
substantive idea, it should be tested directly.
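To see how fragile this is, compare the Durbin-Watson statistic before
and after shuffling the rows (a sketch only; the particular regression
is just for illustration):

    sysuse auto, clear
    generate t = _n                   // index in arrival order
    tsset t
    quietly regress price mpg weight
    estat dwatson                     // DW for this arbitrary ordering

    set seed 12345
    generate u = runiform()
    sort u                            // shuffle the rows
    replace t = _n
    tsset t
    quietly regress price mpg weight
    estat dwatson                     // typically different: the statistic
                                      // reflects nothing but row order

The same model yields a different statistic after nothing more than a
re-sort, which is exactly the problem.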
All that said, my advice is to throw the results of this test away. It
is not, and cannot be, a general purpose test for checking for
independence.
For the record, our friends at UCLA say (I am replacing your
incomplete reference)
http://www.ats.ucla.edu/stat/stata/webbooks/reg/chapter2/statareg2.htm
"When you have data that can be considered to be time-series you
should use the dwstat command that performs a Durbin-Watson test for
correlated residuals.
We don't have any time-series data, so we will use the elemapi2
dataset and pretend that snum indicates the time at which the data
were collected."
So their discussion and example do not endorse the application of this
test here.
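For completeness, the mechanics they describe come down to something
like this (a sketch; the dataset and -snum- follow their webbook, the
model shown is just for illustration, and -dwstat- is the older name
for what is now -estat dwatson-):

    use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2, clear
    tsset snum                 // pretend the school number is time
    regress api00 enroll
    estat dwatson

Pretending that an arbitrary identifier is a time variable is
precisely what makes the exercise uninformative.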
Nick
On Wed, May 2, 2012 at 3:08 PM, Laurie Molina <[email protected]> wrote:
> Regarding DW, as in Chen (2003) I created an index variable for each
> observation, as if it were the "time" variable.
> Ok, I will look for clusters.
> Thanks again!
>
>
> Chen, X., Ender, P., Mitchell, M. and Wells, C. (2003). Regression with Stata.
> UCLA: Academic Technology Services, Statistical Consulting Group.
> http://www.ats.ucla.edu/stat/stata/webbooks/reg/
>
> On Wed, May 2, 2012 at 8:34 AM, Nick Cox <[email protected]> wrote:
>> If your data are not time series, it is hard to see that Durbin-Watson
>> tests make any sense. (It is also puzzling how you managed to
>> calculate them.)
>>
>> Alan Feiveson's point was, I think, to wonder about any cluster or
>> clumping structure: for example, millions of people would often show
>> some similarities within families or communities.
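>>
>> With a grouping variable in hand, something like
>>
>>    regress y x1 x2 x3 x4 x5 x6, vce(cluster famid)
>>
>> would at least allow for that kind of within-group dependence (the
>> variable and group names here are made up for illustration).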
>>
>> Nick
>>
>> On Wed, May 2, 2012 at 2:09 PM, Laurie Molina <[email protected]> wrote:
>>> Thank you all.
>>> I will try adding more variables to the model, and think about the
>>> economic vs statistical significance of the results (I will look for
>>> the appropriate null hypothesis, as opposed to the default zero).
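>>> For instance, something like the following, where the value 0.5 and
>>> the variable names are placeholders for whatever theory suggests:
>>>
>>>    regress y x1 x2 x3 x4 x5 x6
>>>    test _b[x1] = 0.5
>>>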
>>> Regarding the independence of the observations, I ran Durbin-Watson
>>> (although my data are not time series), and the error terms do not
>>> seem to be correlated across observations.
>>> Regards and thanks again,
>>> LM
>>>
>>> On Tue, May 1, 2012 at 7:36 PM, David Hoaglin <[email protected]> wrote:
>>>> Laurie,
>>>>
>>>> It's unusual to see such a large number of observations and so few
>>>> explanatory variables. Often, as the amount of data increases, the
>>>> complexity of the model grows. Do those 4 million observations
>>>> actually have no structure other than that described by the 6
>>>> explanatory variables?
>>>>
>>>> David Hoaglin
>>>>
>>>> On Mon, Apr 30, 2012 at 8:54 PM, Laurie Molina <[email protected]> wrote:
>>>>> Hi everybody,
>>>>> I'm running some OLS with around 4 million observations and 6
>>>>> explanatory variables.
>>