Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
Re: st: Why F-test with regression output
From
John Antonakis <[email protected]>
To
[email protected]
Subject
Re: st: Why F-test with regression output
Date
Thu, 05 May 2011 13:57:33 +0200
Hi:
The F-test that all betas = 0 is useful only if it answers a
theoretically meaningful question; otherwise, it doesn't mean much.
Suppose I want to estimate the effect of x on y, and x is a "new kid"
on the block--so I stick in a whole bunch of controls. It is possible
that the overall F-test is not significant because most of the controls
don't do much. OK, you'll say, but then they were not well selected;
however, if theory suggests that we must partial out the variance due
to those controls, and the coefficient of x is significant but the
F-test is not, I think that these results are still very meaningful.
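A small constructed illustration of this scenario (my sketch in Python with NumPy/SciPy, not from the post itself): one focal predictor that is individually significant, plus ten irrelevant controls that dilute the omnibus F below significance. The design is deliberately artificial (orthonormal columns, no intercept) so every quantity is exact.

```python
import numpy as np
from scipy import stats

# Constructed no-intercept design: 11 orthonormal predictor columns,
# only the first (the focal "x") actually relates to y.
n, k = 100, 11
X = np.zeros((n, k))
X[np.arange(k), np.arange(k)] = 1.0  # X'X = I, so SEs are simple

y = np.zeros(n)
y[0] = 2.5   # signal carried entirely by the focal predictor
y[k:] = 1.0  # noise orthogonal to every column: RSS = n - k = 89

beta, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)
df_resid = n - k
s2 = float(rss[0]) / df_resid   # = 1.0 by construction

t_x = beta[0] / np.sqrt(s2)     # (X'X)^-1 diagonal is all ones
p_x = 2 * stats.t.sf(abs(t_x), df_resid)

ess = float(beta @ beta)        # explained SS (orthonormal columns)
F = (ess / k) / s2
p_F = stats.f.sf(F, k, df_resid)

print(f"t for x = {t_x:.2f}, p = {p_x:.4f}")  # x is significant
print(f"overall F = {F:.3f}, p = {p_F:.4f}")  # omnibus F is not
```

Here t for x is 2.5 (p about .01), but F = 6.25/11, well under 1, so the omnibus test cannot reject even though the focal coefficient clearly does.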
I too think, as Joerg suggested, that the importance of the F-test
probably stems from psychological experimental research, where one or
two variables were exogenously manipulated, so the F-test there would
indicate whether the experiment worked (though again, one might control
for a competing treatment, or many competing placebo treatments that do
nothing, and those could make the F-test non-significant).
Best,
J.
__________________________________________
Prof. John Antonakis
Faculty of Business and Economics
Department of Organizational Behavior
University of Lausanne
Internef #618
CH-1015 Lausanne-Dorigny
Switzerland
Tel ++41 (0)21 692-3438
Fax ++41 (0)21 692-3305
http://www.hec.unil.ch/people/jantonakis
Associate Editor
The Leadership Quarterly
__________________________________________
On 05.05.2011 06:15, Richard Williams wrote:
At 04:19 PM 5/4/2011, Steven Samuels wrote:
Nick, I've seen examples where every regression coefficient was
non-significant (p>0.05), but the F-test rejected the hypothesis that
all were zero. This can happen even when the predictors are
uncorrelated. So I don't consider the test superfluous.
Steve
I also find the omnibus test helpful.
If, say, there were a lot of p-values around .06, it is quite likely
that at least one effect differs from 0.
If variables are highly correlated, the omnibus F may correctly tell
you that at least one effect differs from 0, even if you can't tell
for sure which one it is.
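Steve's and Richard's point can be made concrete with a constructed example (my sketch in Python with NumPy/SciPy, not from the thread): five uncorrelated predictors, each with a modest effect whose individual p-value sits just above .05, yet the joint F rejects comfortably. Again the design is artificial (orthonormal columns, no intercept) so the numbers are exact.

```python
import numpy as np
from scipy import stats

# Constructed no-intercept design: five orthonormal (hence exactly
# uncorrelated) predictors, each with the same modest effect.
n, k = 100, 5
X = np.zeros((n, k))
X[np.arange(k), np.arange(k)] = 1.0  # X'X = I

y = np.zeros(n)
y[:k] = 1.8  # each coefficient estimate will be exactly 1.8
y[k:] = 1.0  # residual part: RSS = n - k, so s^2 = 1

beta, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)
df_resid = n - k
s2 = float(rss[0]) / df_resid

t_stats = beta / np.sqrt(s2)                       # all equal to 1.8
p_each = 2 * stats.t.sf(np.abs(t_stats), df_resid)

F = (float(beta @ beta) / k) / s2                  # = 1.8^2 = 3.24
p_F = stats.f.sf(F, k, df_resid)

print("per-coefficient p-values:", np.round(p_each, 3))  # all > .05
print(f"omnibus F = {F:.2f}, p = {p_F:.4f}")             # < .05
```

Each t is 1.8 (two-sided p about .075), but the five modest effects pool into F = 3.24 on (5, 95) df, which is significant at .05: exactly the situation where looking only at individual p-values would mislead.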
In both of the above cases, if you just looked at P values for
individual coefficients, you might erroneously conclude that no
effects differ from zero when it is more likely that at least one
effect does.
If the omnibus F isn't significant, there may not be much point in
looking at individual variables. If you have 20 variables in the
model, one may be significant at the .05 level by chance alone,
but the omnibus F probably won't be. That is, a fishing expedition
could turn up a few coefficients that are statistically significant
even though the omnibus F is not.
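The 20-variable point is simple arithmetic: with 20 truly null coefficients tested at alpha = .05, the chance of at least one spurious "significant" result is 1 - .95^20, about .64 (a quick check, assuming the tests are independent, which real regressors won't satisfy exactly):

```python
# Chance of at least one false positive among m independent tests
# of true nulls at level alpha (illustrative back-of-envelope figure).
alpha, m = 0.05, 20
p_at_least_one = 1 - (1 - alpha) ** m
print(f"P(at least one of {m} null coefficients at p < {alpha}) "
      f"= {p_at_least_one:.3f}")
```

The omnibus F, by contrast, holds its .05 error rate for the joint null regardless of how many regressors are in the model.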
Incidentally, you might just as easily ask why the Model Chi Square
gets reported in routines like logistic and ordinal regression. The
main advantage of Model Chi Square over omnibus F is that Model Chi
Square is easier to use when comparing constrained and unconstrained
models (e.g. if model 1 has x1 and x2, and model 2 has x1, x2, x3, and
x4, I can easily use the model chi-squares to test whether the
effects of x3 and/or x4 differ significantly from 0).
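The nested-model comparison Richard describes is a likelihood-ratio test: the difference in the two models' deviances (or model chi-squares) is itself chi-square distributed, with df equal to the number of added parameters. A sketch of the mechanics; the deviance figures below are made up purely for illustration, not from any real fit:

```python
from scipy import stats

# Hypothetical deviances: model 1 (x1, x2) vs model 2 (x1, x2, x3, x4).
dev_restricted = 210.4  # made-up number for the smaller model
dev_full = 202.6        # made-up number for the larger model

lr = dev_restricted - dev_full  # LR statistic = 7.8
df = 2                          # two added parameters: x3 and x4

p = stats.chi2.sf(lr, df)
print(f"LR chi2({df}) = {lr:.1f}, p = {p:.4f}")
```

With these illustrative numbers the joint test of x3 and x4 rejects at .05; for 2 df the p-value is exactly exp(-LR/2), about .02 here.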
-------------------------------------------
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
HOME: (574)289-5227
EMAIL: [email protected]
WWW: http://www.nd.edu/~rwilliam
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*