Sounds risky to me.
As I said, the R^2 is just not a guide to model
merit in this case. Beyond that I obviously can't
comment on your results, but I'd advise seeking out,
e.g.,
John A. Nelder. 1998.
The Selection of Terms in Response-Surface Models
-- How Strong is the Weak-Heredity Principle?
The American Statistician 52(4): 315-318.
and following references from there.
Nick
[email protected]
>
> Thank you, Nick.
>
> Mine is a yield response model, and since I am including
> mostly inputs as explanatory vars and some categorical
> dummies, perhaps a model without the constant could
> work: no matter what I include, I only get good
> results with noconst.
>
> Cris
>
>
> Date: Mon, 9 Aug 2004 09:14:27 +0100
> From: "Nick Cox" <[email protected]>
> Subject: st: RE:
> On the R^2, your starting point is now a prediction of zero,
> not a prediction of the mean response.
>
> In a much simpler case, below, dropping the constant
> gives a higher R-sq but a totally ludicrous model. Why then
> does the R-sq look so good? Because the predictions
> -- which range from 11 to 30 mpg -- are much closer to
> the data than a prediction of 0 is; that gain is far larger
> than the first model's gain over the mean of -mpg-. Your model is more
> complicated, and I can't see your data, but I guess that
> the same applies. If there is a really good reason,
> like a law of physics, to force predictions through
> the origin, then do it. (One can certainly improve
> on a linear regression of -mpg- on -weight-, a secondary
> point.)
>
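For concreteness, the kind of comparison Nick describes can be
reproduced with Stata's auto data (which is where -mpg- and -weight-
come from); the worked example itself was not quoted above, so this
is only a sketch:

        . sysuse auto, clear
        . regress mpg weight
        . regress mpg weight, noconstant

With the constant, the reported R-sq compares residual variation with
variation around the mean of -mpg-; with -noconstant- it compares
residual variation with variation around zero (an uncentered total sum
of squares), which is far larger, so R-sq looks much better even
though a line forced through the origin fits these data badly.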
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/