Assessing accuracy here suggests to me what is often called assessing
agreement. The concordance correlation is designed to measure agreement:
-search concord- to find a Stata implementation by Thomas Steichen and
myself.
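For readers who want to see what the concordance correlation measures, here is a minimal sketch of Lin's coefficient in plain Python. This is an illustration only, not the -concord- command itself:

```python
# Lin's concordance correlation coefficient (CCC), the agreement
# measure that -concord- implements. Minimal sketch for illustration.

def concordance_correlation(x, y):
    """Lin's CCC between two equal-length sequences of measurements."""
    n = len(x)
    if n != len(y) or n < 2:
        raise ValueError("x and y must have equal length, n >= 2")
    mx = sum(x) / n
    my = sum(y) / n
    # Population (divide-by-n) variances and covariance, as in Lin (1989)
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    # CCC = 2*cov / (var_x + var_y + (mean difference)^2)
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# Perfect agreement gives 1; a constant shift is penalised,
# unlike the ordinary Pearson correlation.
print(concordance_correlation([1, 2, 3, 4], [1, 2, 3, 4]))  # 1.0
print(concordance_correlation([1, 2, 3, 4], [2, 3, 4, 5]))
```

Note the contrast with Pearson's r: the shifted series above still has r = 1, but its CCC is below 1 because agreement, not just linear association, is being assessed.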
As with anything else, however, you can miss a lot if you try to reduce
assessment to a single measure.
For graphical approaches see
SJ-4-3  gr0005  . . . . .  Speaking Stata: Graphing agreement and disagreement
        . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  N. J. Cox
        Q3/04   SJ 4(3):329--349                                 (no commands)
        how to select the right graph to portray comparison or
        assessment of agreement or disagreement between data
        measured on identical scales
and (if you can get access)
Cox, N. J. 2006. Assessing agreement of measurements and predictions in
geomorphology. Geomorphology 76: 332--346.
doi:10.1016/j.geomorph.2005.12.001
Nick
[email protected]
Ariel Linden (forwarded by Marcello Pagano)
The ROC curve is a wonderful tool for assessing predictive accuracy when
the outcome is dichotomous, but I would love to get opinions on methods
to assess accuracy in models using continuous outcome variables (outside
of the r-squared statistic, of course). I am thinking along the lines of
mean absolute percentage error (as is common in time series analysis),
or possibly bootstrapping the difference, but I would love to hear from
others what they think.
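As an aside, the mean absolute percentage error mentioned above can be sketched in a few lines of Python. This is a minimal illustration; note that observations equal to zero are excluded here, and how to handle them is itself a substantive choice:

```python
# Mean absolute percentage error (MAPE), as used in time series work.
# Minimal sketch: pairs with a zero actual value are dropped, since
# the percentage error is undefined there.

def mape(actual, predicted):
    """MAPE in percent over pairs with nonzero actual values."""
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    if not pairs:
        raise ValueError("no nonzero actual values")
    return 100 * sum(abs((a - p) / a) for a, p in pairs) / len(pairs)

# Errors of 10%, 10%, and 0% average to about 6.67%.
print(mape([100, 200, 400], [110, 180, 400]))
```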
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/