Mark Schaffer <[email protected]> has a follow-up question about -ovtest-:
> My follow-up question is simple: why does the shifting and scaling used by
> Stata's -ovtest- introduce greater accuracy rather than, say,
> greater rounding error? (Either accuracy or error would remove the
> numerical collinearity.) The algebra doesn't help me here, since all three
> methods are algebraically equivalent. I'm guessing that there's probably a
> general principle about how best to maintain numerical precision, but I
> don't know what it might be.
Actually, the three methods you describe are not all algebraically equivalent,
which is what -_rmcoll- and -coldiag2- are picking up. The algebra I mentioned
only shows us that the regression models yield a statistically equivalent F
test. The direct approach and your center/rescale-after-taking-powers method
are algebraically equivalent to each other, but -ovtest-'s
center/rescale-then-take-powers approach is not.
Let's just look at x^2 and x^3. If the values of x are not near zero (say they
are all positive), then it is easy to see how x^2 and x^3 can become
numerically collinear--even if you center/rescale them after taking the
powers.
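
To make that concrete, here is a minimal sketch in Stata (the offset of 100,
the seed, and the variable names are illustrative choices of mine, not
anything -ovtest- does). Over a range like [100, 101], x^3 is very nearly a
linear function of x^2, so the two columns are essentially collinear:

clear
set obs 100
set seed 12345
generate double x = 100 + runiform()   // all values of x far from zero
generate double x2 = x^2
generate double x3 = x^3
correlate x2 x3                        // correlation is essentially 1

Centering and rescaling x2 and x3 at this point is just an affine
transformation of each column, so (with a constant in the model) it cannot
undo that near-collinearity.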
Now generate z from the centered/rescaled values of x and then take the
powers; z^2 is always nonnegative, whereas z^3 is negative wherever z is
negative. There is no mistaking the two for collinear in this case.
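
Continuing the same sketch: standardizing x first and then taking powers (one
possible way to center and rescale; I am not claiming it is the exact
transform -ovtest- applies) gives a z^2 that is nonnegative everywhere and a
z^3 that changes sign with z, so the two are no longer near-collinear:

summarize x
generate double z  = (x - r(mean)) / r(sd)   // center and rescale first...
generate double z2 = z^2                     // ...then take the powers
generate double z3 = z^3
correlate z2 z3                              // no longer close to 1: z^2 >= 0 while z^3 changes sign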
Incidentally, I do not think of this as an accuracy or numerical precision
issue. To me it is more like shifting x into regions where we are better
equipped to numerically distinguish between powers of x.
--Jeff
[email protected]