From: Nick Cox <njcoxstata@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Testing to compare goodness of fit
Date: Tue, 4 Oct 2011 21:51:07 +0100
Relying on R-sq alone is not a good idea. Goodness of fit can be compared by

1. Plotting the two sets of predictions in time.
1a. Plotting the two sets of residuals in time.
2. Looking for autocorrelation in residuals.
3. Scatter plots of observed vs predicted in each case.
3a. Residual vs predicted plots.

One maxim is never to use an R-sq without inspecting the corresponding scatter plot. Another is that a good model is associated with pattern-free residuals. If the models look equally good on all these checks, any reason to discriminate between them is likely to be scientific rather than statistical. (A sketch of these checks in Stata follows the quoted message below.)

Nick

On Tue, Oct 4, 2011 at 9:35 PM, <tlv101@gmx.net> wrote:

> I have two univariate time series models, both explaining variable Y: one with variable X and one with variable Z as the explanatory variable (plus a constant). The two models yield R-squareds that are rather close to each other. Can I really say that the model with X is better than the model with Z just by comparing these R-squareds, given that with 5 observations more or fewer things might look different? Or can I test whether these R-squareds are statistically different from each other? Any other ideas for evaluating goodness of fit in this case, apart from comparing RMSEs? Or would comparing (F-testing) the coefficients of X and Z be helpful here?
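A minimal sketch of these checks in Stata, assuming the data are -tsset- on a time variable t and using placeholder names y, x, and z for the response and the two competing predictors:

tsset t

* fit each model; save predictions and residuals under distinct names
regress y x
predict yhat_x, xb
predict res_x, residuals

regress y z
predict yhat_z, xb
predict res_z, residuals

* 1 and 1a: predictions and residuals plotted in time
tsline y yhat_x yhat_z
tsline res_x res_z, yline(0)

* 2: autocorrelation in each set of residuals
ac res_x
ac res_z

* 3 and 3a: observed vs predicted, and residual vs predicted
scatter y yhat_x
scatter y yhat_z
scatter res_x yhat_x, yline(0)
scatter res_z yhat_z, yline(0)

If a single number is wanted as a complement to the autocorrelation plots, -estat dwatson- run directly after each -regress- reports the Durbin-Watson statistic for first-order autocorrelation in that model's residuals.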