For any treatment effect estimation, you must have a control group to
compare to, or at least some external variability in treatment
intensity, like the number of hours of the new program the kids were
sitting in. Do you? If you don't, then you can only handwave about how
great this program is, and about how the parents should spend even more
money on that nice little school :)). Any good economist would note,
however, that if you do have variability in that treatment intensity,
it is more likely to be endogenous rather than exogenous. So even
though you can get some estimate of the effect of an extra hour in the
program (which you couldn't possibly get if everybody received the same
treatment), that estimate will be biased, because both the outcome
(i.e., the error term in the regression) and the treatment intensity
are correlated with an unobserved desire to be treated.
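
To make that concrete, here is a minimal Stata sketch of what I mean.
It is only an illustration: every variable name in it (score, hours,
grade, z_lottery, classid) is made up, since I have no idea what the
school's data actually look like. The naive regression of scores on
program hours is biased if hours is endogenous; the textbook remedy is
an instrumental variable, something that shifts hours around but is
unrelated to the unobserved desire to be treated (an admission lottery,
say), assuming such a thing even exists in this setting:

    * naive OLS: biased if hours is correlated with the error term
    regress score hours i.grade, vce(cluster classid)

    * IV/2SLS: z_lottery must shift hours but be unrelated to the
    * unobserved desire to be treated
    ivregress 2sls score i.grade (hours = z_lottery), vce(cluster classid)

With one class per grade, of course, clustering on the class is the
same as clustering on the grade, and with only nine clusters the
cluster-robust standard errors will themselves be pretty shaky.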
On Wed, Jan 28, 2009 at 12:08 PM, David Airey
<[email protected]> wrote:
> I was interviewing my kids for a small school (200 kids, K-8th grade). The
> school makes changes to its teaching programs to try to improve standardized
> test scores and in-class performance. They use a set of self-paced programs
> the kids sit at weekly that provide the teachers and principal with
> feedback. They plot summary statistics by semester, year, and grade to
> discern the effects of introducing new programs. I'm not sure what
> granularity of data they have below these levels, but I assume they might
> have weekly feedback per student from the self-administered program they
> use. Anyway, is it possible to statistically test for the effects of
> programmatic changes in a school this small, with essentially one class per
> grade?
--
Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.