Maarten,
Thank you for your careful evaluation of the research question and analysis methods. Consistent with your suggestions, our research plan includes evaluating the study results with ANCOVA, propensity score matching, and the Heckman method. All of these methods, and more, were used in a similar study of test preparation involving different tests, populations, and test-preparation behaviors (Powers, D. E., & Rock, D. A. (1998). Effects of coaching on SAT I: Reasoning scores. New York: College Entrance Examination Board).
Based on my work with existing AFQT datasets, I am hopeful that rho for the test-prep data will be small (.05 to .10).
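For concreteness, the Heckman piece of the plan might be sketched in Stata roughly as below; afqt_post, afqt_pre, testprep, educ, and age are placeholder names rather than our actual variables:

* treatment-effects model for the full sample; rho in the output is the
* correlation between the errors of the treatment and outcome equations,
* and the reported test of independent equations is a test of rho = 0
treatreg afqt_post afqt_pre educ age, treat(testprep = afqt_pre educ age)

A rho in the .05 to .10 range would suggest that, once the observed covariates are controlled, selection on unobservables is modest.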
________________________________
From: [email protected] on behalf of Maarten buis
Sent: Fri 5/30/2008 1:57 AM
To: [email protected]
Subject: RE: st: Testing for program effectiveness with heckman
--- "Riemer, Richard A CIV DMDC" <[email protected]>
wrote:
> Maarten, Thank you for your reply. I can see the distinction you are
> making. However, I wanted to use -heckman- because I thought it would
> do a better job of accounting for self-selection into test preparation
> than simple moderated regression, where there could be correlated
> errors between the two equations. Following the example of women's
> wages, we could say that 'AFQT after test prep' is missing for sample
> members who do not engage in test prep and that those sample members
> would have scored lower than average if they had engaged in test prep.
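In Stata terms, the setup you describe would look something like this (variable names are placeholders):

* afqt_post is treated as observed only for those who prepared, so
* testprep plays the role of the selection indicator, much like the
* employment indicator in the women's-wage example
heckman afqt_post afqt_pre educ age, select(testprep = afqt_pre educ age)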
What you are running into is the fundamental problem with causal
analysis. A causal effect can be thought of as a counterfactual
experiment: you want to compare the test score of someone who prepared
for the test with the test score of that same person when (s)he did not
prepare for the test. The problem is that you cannot have a person who
is both prepared and unprepared at the same time. An alternative way of
thinking about this is that you are looking for another person who is
the same in every respect except that that person did not prepare. Such
a person obviously does not exist.
The information we do have is a comparison of groups. You can use
regression / ANOVA to control for other observed variables. An
alternative method of controlling for observed characteristics is
propensity score matching. Some would call these estimates biased
because they expect that students who know that they won't do well are
less likely to prepare, thus leading to an overestimation of the effect
of preparation; the students who did not prepare are expected to gain
less from preparation than the students who did prepare. I would not
call the group comparisons biased, but I would call them the empirical
information that you use in your model. Once you have presented the
empirical information, you can start adding assumptions, for instance
by using -treatreg-, thus sacrificing empirical content to get closer
to the theoretical concept you are interested in. If anything, group
comparisons are, if interpreted correctly as the difference between
groups, likely to be much less biased than so-called causal models.
This is not because one type of model is inherently better than the
other, but because the group comparison models try to solve a much
easier problem.
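To make that concrete in Stata (again with placeholder variable names; -psmatch2- is user-written and available from SSC):

* the empirical information: the raw difference between the two groups
ttest afqt_post, by(testprep)

* control for observed variables with regression / ANOVA
regress afqt_post testprep afqt_pre educ age

* or control for the same observables with propensity score matching
ssc install psmatch2
psmatch2 testprep afqt_pre educ age, outcome(afqt_post)

* add assumptions about selection on unobservables; ideally the treat()
* equation includes at least one variable excluded from the outcome equation
treatreg afqt_post afqt_pre educ age, treat(testprep = afqt_pre educ age)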
-- Maarten
-----------------------------------------
Maarten L. Buis
Department of Social Research Methodology
Vrije Universiteit Amsterdam
Boelelaan 1081
1081 HV Amsterdam
The Netherlands
visiting address:
Buitenveldertselaan 3 (Metropolitan), room Z434
+31 20 5986715
http://home.fsw.vu.nl/m.buis/
-----------------------------------------
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/