Riti assures me this is not a homework question. I still think the
UCLA web book is a good place to start; it is not always clear to me
where the understanding of how to interpret the coefficients on
indicator variables breaks down...
but here goes:
clear
input score bonus control baseline
0.4 1 0 1
0.2 1 0 1
0.1 1 0 0
0.9 1 0 0
0.5 0 1 1
0.6 0 1 1
0.7 0 1 0
0.76 0 1 0
end
g bonusbaseline=bonus*baseline
g controlbaseline=control*baseline
reg sc bonus bonusb baseline, nohe
reg sc bonus bonusb controlb, nohe
Q: What explains the difference in the
coefficients for bonusbaseline across
regression models?
A: The coef on bonusbaseline measures the increment
due to bonusbaseline, given the other terms in the model.
In the first regression, that's
(mean where bonus=1 & baseline=1, i.e. bonusbaseline=1)-[
(_cons=mean where bonus, bonusbaseline, and baseline are all 0)+
(mean where only bonus=1, less _cons)+
(mean where only baseline=1, less _cons)
]=
.3-(.73+(.5-.73)+(.55-.73))=-0.02
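As a quick check on that arithmetic (using only the variables
generated above), something along these lines should reproduce
the cell means and the -0.02:
* cell means of score by bonus and baseline
tabulate bonus baseline, summarize(score) means
* rebuild the first model's coef on bonusbaseline from those means
display .3 - (.73 + (.5 - .73) + (.55 - .73))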
Similarly, in the second regression baseline itself
is not in the model (its role is split between
bonusbaseline and controlbaseline), so the coef on
bonusbaseline is just the effect of baseline within
the bonus group: the diff in means between group 1,
defined by bonus=1 and baseline=1 (obs 1 and 2, mean
.3), and group 2, defined by bonus=1 and baseline=0
(obs 3 and 4, mean .5), which is .3-.5=-0.2
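Again, just to check the arithmetic rather than add anything new:
* second model's coef on bonusbaseline = baseline effect
* within the bonus group
summarize score if bonus==1 & baseline==1
summarize score if bonus==1 & baseline==0
display .3 - .5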
This is all probably easier to see if you:
collapse sc, by(control bonus baseline)
g bonusbaseline=bonus*baseline
g controlbaseline=control*baseline
li, noo clean
reg sc bonus bonusb baseline, nohe
reg sc bonus bonusb controlb, nohe
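For what it's worth, if I have the arithmetic right the collapsed
data reduce to the four cell means the calculations above were
built from, and the two regressions on those four observations
should give the same coefficients as on the full data (each cell
has the same number of obs):
* expected cell means after the collapse:
*   bonus=1 control=0 baseline=0  ->  .5
*   bonus=1 control=0 baseline=1  ->  .3
*   bonus=0 control=1 baseline=0  ->  .73
*   bonus=0 control=1 baseline=1  ->  .55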
but no doubt someone else can give a
more accessible interpretation of regression
coefficients.