Re: RE: st: Testing to compare goodness of fit
From: [email protected]
To: [email protected]
Subject: Re: RE: st: Testing to compare goodness of fit
Date: Tue, 04 Oct 2011 23:08:06 +0200
Hi Cam,
The question is whether the WTI oil price or the LLS oil price is a better predictor of US GDP growth. So including both simultaneously doesn't really make much sense.
-------- Original Message --------
> Date: Tue, 4 Oct 2011 16:57:44 -0400
> From: Cameron McIntosh <[email protected]>
> To: STATA LIST <[email protected]>
> Subject: RE: st: Testing to compare goodness of fit
> Also, why not estimate a single model with X and Z jointly predicting Y?
> If the predictors are correlated, it would seem to me that you would need to
> include both of them in the model in order to get unbiased estimates. I
> guess I'm thinking in parallel process growth curve terms (i.e., X, Z and Y
> having co-evolving trajectories)... I'm not sure what framework you're
> implementing this in and what it allows in terms of multiple predictors of the
> trend.
> Cam
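>
> For example, a minimal sketch of that joint model in Stata, using
> placeholder variable names y, x and z rather than the actual series:
>
> * both predictors entered simultaneously
> regress y x z
> * does each predictor still matter once the other is in the model?
> test x
> test z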
>
> > Date: Tue, 4 Oct 2011 21:51:07 +0100
> > Subject: Re: st: Testing to compare goodness of fit
> > From: [email protected]
> > To: [email protected]
> >
> > Relying on R-sq alone is not a good idea.
> >
> > Goodness of fit can be compared by
> >
> > 1. Plotting the two sets of predictions in time.
> > 1a. Plotting the two sets of residuals in time.
> >
> > 2. Looking for autocorrelation in residuals.
> >
> > 3. Scatter plots of observed vs predicted in each case.
> > 3a. Residual vs predicted plots.
> >
> > One maxim is never to use an R-sq without inspecting the corresponding
> > scatter plot. Another is that a good model is associated with
> > pattern-free residuals.
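> >
> > For instance, a rough sketch of these checks in Stata, assuming a
> > time variable t and placeholder variable names y, x and z:
> >
> > tsset t
> > regress y x
> > predict yhat_x, xb
> > predict res_x, residuals
> > regress y z
> > predict yhat_z, xb
> > predict res_z, residuals
> > * 1 and 1a: predictions and residuals in time
> > tsline y yhat_x yhat_z
> > tsline res_x res_z
> > * 2: autocorrelation in residuals
> > ac res_x
> > ac res_z
> > * 3 and 3a: observed vs predicted and residual vs predicted
> > * (x model shown; repeat with yhat_z and res_z)
> > scatter y yhat_x
> > scatter res_x yhat_x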
> >
> > If the models look equally good, there is likely to be some scientific
> > reason to discriminate between them.
> >
> > Nick
> >
> > On Tue, Oct 4, 2011 at 9:35 PM, <[email protected]> wrote:
> >
> > I have two univariate time series models, both explaining variable Y:
> > one with variable X and one with variable Z as the explanatory
> > variable (plus a constant). The two models yield R-squareds that are
> > rather close to each other. Can I really say that model X is better
> > than model Z just by comparing these R-squareds (since with 5
> > observations more or fewer, things might look different)? Or can I
> > test whether these R-squareds are statistically different from each
> > other? Any other ideas for evaluating goodness of fit in this case,
> > apart from comparing RMSEs? Or would comparing (F-testing) the
> > coefficients of X and Z be helpful here?
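> >
> > (For concreteness, a minimal Stata sketch of the two models, again
> > with placeholder names y, x and z; e(r2) and e(rmse) after -regress-
> > hold the statistics in question:)
> >
> > regress y x
> > display "model with x: R-sq = " e(r2) "  RMSE = " e(rmse)
> > regress y z
> > display "model with z: R-sq = " e(r2) "  RMSE = " e(rmse)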
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/