Title:  Chow tests
Author: William Gould, StataCorp
Privately I was asked yet another question on Chow tests. The question started out “Is a Chow test the correct test to determine whether data can be pooled together?” and went on from there.
In the past, I have always given in and cast my answer in Chow-test terms. In this reply, I try a different approach and, I think, the result is more useful.
This reply concerns linear regression (though the technique is really more general than that), and I gloss over the details of pooling the residuals and whether the residual variances are really the same. For the latter, I think I can be forgiven.
Here is what I wrote:
Is a Chow test the correct test to determine whether data can be pooled together?
A Chow test is simply a test of whether the coefficients estimated over one group of the data are equal to the coefficients estimated over another, and you would be better off to forget the word Chow and remember that definition.
History: In the days when statistical packages were not as sophisticated as they are now, testing whether coefficients were equal was not so easy. You had to write your own program, typically in FORTRAN. Chow showed how the test could be computed from statistics that regression programs routinely reported, namely the residual sums of squares from the separate and the combined regressions, and that computation produces the same result as the Wald test of equal coefficients.
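For the record, that computation is the following (notation mine): fit the model separately over the two groups to obtain residual sums of squares RSS_1 and RSS_2, fit it once over the combined data with a single set of coefficients to obtain RSS_c, and compute

F = [ (RSS_c - RSS_1 - RSS_2) / k ] / [ (RSS_1 + RSS_2) / (N_1 + N_2 - 2k) ]

where k is the number of coefficients in each equation (including the intercept) and N_1 and N_2 are the two sample sizes. Under the hypothesis that the coefficients are equal (and the residual variances are equal), the statistic has an F(k, N_1 + N_2 - 2k) distribution.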
What does “whether the data can be pooled together” mean? Do you often meet nonprofessionals who say to you, “I was wondering whether the data could be pooled?” Forget that phrase, too: it is just another piece of jargon for testing whether the behavior is the same, as measured by whether the coefficients are the same.
Let’s pretend that you have some model and two or more groups of data. Your model predicts something about the behavior within the group based on certain characteristics that vary within the group. Under the assumption that each group's behavior is unique, you have
y_1 = X_1*b_1 + u_1     (equation for group 1)
y_2 = X_2*b_2 + u_2     (equation for group 2)
and so on. Now you want to test whether the behavior for one group is the same as for another, which means you want to test
b_1 = b_2 = ...
How do you do that? Testing coefficients across separately fitted models is difficult to impossible, depending on things we need not go into right now. A trick is to “pool” the data to convert the multiple equations into one giant equation
y = d1*(X_1*b1 + u1) + d2*(X_2*b2 + u2) + ...
where y is the set of all outcomes (y_1, y_2, ...), and d1 is a variable that is 1 when the data are for group 1 and 0 otherwise, d2 is 1 when the data are for group 2 and 0 otherwise, ....
Notice that from the above I can retrieve the original equations. Setting d1=1 and d2=d3=...=0, I get the equation for group 1; setting d1=0 and d2=1 and d3=...=0, I get the equation for group 2; and so on.
Now let’s start with
y = d1*(X_1*b1 + u1) + d2*(X_2*b2 + u2) + ...
and rewrite it by a little algebraic manipulation:
y = d1*(X_1*b1 + u1) + d2*(X_2*b2 + u2) + ...
  = d1*X_1*b1 + d1*u1 + d2*X_2*b2 + d2*u2 + ...
  = d1*X_1*b1 + d2*X_2*b2 + ... + d1*u1 + d2*u2 + ...
  = X_1*d1*b1 + X_2*d2*b2 + ... + d1*u1 + d2*u2 + ...
  = (X_1*d1)*b1 + (X_2*d2)*b2 + ... + d1*u1 + d2*u2 + ...
By stacking the data, I can obtain estimates of b1, b2, ...
I include not X_1 in my model, but X_1*d1 (a set of variables equal to X_1 when group is 1 and 0 otherwise); I include not X_2 in my model, but X_2*d2 (a set of variables equal to X_2 when group is 2 and 0 otherwise); and so on.
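Before we turn to real data, here is a minimal Stata sketch of that construction, assuming a hypothetical grouping variable g coded 1 and 2, an outcome y, and regressors x1 and x2 (these names are placeholders and are not used in the example that follows):

. * group dummies (hypothetical grouping variable g)
. generate d1 = (g==1)
. generate d2 = (g==2)
. * group-specific copies of the regressors
. generate x1_1 = x1*d1
. generate x2_1 = x2*d1
. generate x1_2 = x1*d2
. generate x2_2 = x2*d2
. * one giant equation: each group gets its own slopes and its own intercept
. regress y d1 x1_1 x2_1 d2 x1_2 x2_2, noconstant

The noconstant option appears because d1 and d2 play the role of the group-specific intercepts; more on that below.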
Let’s use auto.dta and pretend that I have two groups.
. sysuse auto
. generate group1=rep78==3
. generate group2=group1==0
I could fit the models separately:
. regress price mpg weight if group1==1
      Source |       SS           df       MS      Number of obs   =        30
-------------+----------------------------------   F( 2, 27)       =     16.20
       Model |   196545318         2  98272658.8   Prob > F        =    0.0000
    Residual |   163826398        27  6067644.36   R-squared       =    0.5454
-------------+----------------------------------   Adj R-squared   =    0.5117
       Total |   360371715        29  12426610.9   Root MSE        =    2463.3

------------------------------------------------------------------------------
       price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         mpg |   13.14912   184.5661     0.07   0.944    -365.5492    391.8474
      weight |   3.517687   1.015855     3.46   0.002     1.433324      5.60205
       _cons |  -5431.147   6599.898    -0.82   0.418    -18973.02    8110.725
------------------------------------------------------------------------------
. regress price mpg weight if group2==1

      Source |       SS           df       MS      Number of obs   =        44
-------------+----------------------------------   F( 2, 41)       =      5.16
       Model |  54562909.6         2  27281454.8   Prob > F        =    0.0100
    Residual |   216614915        41  5283290.61   R-squared       =    0.2012
-------------+----------------------------------   Adj R-squared   =    0.1622
       Total |   271177825        43  6306461.04   Root MSE        =    2298.5

------------------------------------------------------------------------------
       price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         mpg |  -170.5474    93.3656    -1.83   0.075     -359.103     18.0083
      weight |   .0527381   .8064713     0.07   0.948    -1.575964      1.68144
       _cons |   9685.028   4190.693     2.31   0.026     1221.752      18148.3
------------------------------------------------------------------------------
I could fit the combined model:
. generate mpg1=mpg*group1
. generate weight1=weight*group1
. generate mpg2=mpg*group2
. generate weight2=weight*group2
. regress price group1 mpg1 weight1 group2 mpg2 weight2, noconstant
      Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F( 6, 68)       =     91.38
       Model |  3.0674e+09         6   511232168   Prob > F        =    0.0000
    Residual |   380441313        68  5594725.19   R-squared       =    0.8897
-------------+----------------------------------   Adj R-squared   =    0.8799
       Total |  3.4478e+09        74  46592355.7   Root MSE        =    2365.3

------------------------------------------------------------------------------
       price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
      group1 |  -5431.147   6337.479    -0.86   0.394    -18077.39    7215.096
        mpg1 |   13.14912   177.2275     0.07   0.941    -340.5029    366.8012
     weight1 |   3.517687   .9754638     3.61   0.001     1.571179    5.464194
      group2 |   9685.028   4312.439     2.25   0.028      1079.69    18290.37
        mpg2 |  -170.5474   96.07802    -1.78   0.080    -362.2681    21.17334
     weight2 |   .0527381   .8299005     0.06   0.950    -1.603303    1.708779
------------------------------------------------------------------------------
What is this noconstant option? We must remember that when we fit the separate models, each has its own intercept. There was an intercept in X_1, X_2, and so on. What I have done above is literally translate
y = (X_1*d1)*b1 + (X_2*d2)*b2 + d1*u1 + d2*u2
into Stata: I included the variables group1 and group2 (variables equal to 1 for their respective groups and 0 otherwise) and told Stata to omit the overall intercept.
I do not recommend you fit the model the way I have just illustrated because of numerical concerns—we will get to that later. Fit the models separately or jointly, and you will get the same estimates for b_1 and b_2.
Now we can test whether the coefficients are the same for the two groups:
. test _b[mpg1]=_b[mpg2], notest

 ( 1)  mpg1 - mpg2 = 0

. test _b[weight1]=_b[weight2], accum

 ( 1)  mpg1 - mpg2 = 0
 ( 2)  weight1 - weight2 = 0

       F(  2,    68) =    5.61
            Prob > F =    0.0056
That is the Chow test. Something was omitted: the intercept. If we really wanted to test whether the two groups were the same, we would test
. test _b[mpg1]=_b[mpg2], notest

 ( 1)  mpg1 - mpg2 = 0

. test _b[weight1]=_b[weight2], accum notest

 ( 1)  mpg1 - mpg2 = 0
 ( 2)  weight1 - weight2 = 0

. test _b[group1]=_b[group2], accum

 ( 1)  mpg1 - mpg2 = 0
 ( 2)  weight1 - weight2 = 0
 ( 3)  group1 - group2 = 0

       F(  3,    68) =    4.07
            Prob > F =    0.0102
Using this approach, however, we are not tied down by what the “Chow test” can test. We can formulate any hypothesis we want. We might think that mpg works the same way in both groups but that weight works differently, and each group has its own intercept. Then we could test
. test _b[mpg1]=_b[mpg2]

 ( 1)  mpg1 - mpg2 = 0

       F(  1,    68) =    0.83
            Prob > F =    0.3654
by itself. If we had more variables, we could test any subset of variables.
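For instance, had the regressions also included length interacted with the group dummies as length1 and length2 (hypothetical here; those variables were not created above), the analogous single-coefficient test across groups would be

. test _b[length1]=_b[length2]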
Is “pooling the data” justified? Of course it is: we just established that pooling the data is just another way of fitting separate models and that fitting separate models is certainly justified—we got the same coefficients. That’s why I told you to forget the phrase about whether pooling the data is justified. People who ask that do not really mean to ask what they are saying; they mean to ask whether the coefficients are the same. In that case, they should say that. Pooling is always justified, and it corresponds to nothing more than the mathematical trick of writing separate equations,
y_1 = X_1*b_1 + u_1     (equation for group 1)
y_2 = X_2*b_2 + u_2     (equation for group 2)
as one equation
y = (X_1*d1)*b1 + (X_2*d2)*b2 + d1*u1 + d2*u2
There are many ways I can write the above equation, and I want to write it a little differently because of numerical concerns. Starting with
y = (X_1*d1)*b1 + (X_2*d2)*b2 + d1*u1 + d2*u2
let’s do a little algebra. Note that X_1*d1 + X_2*d2 = X, the stacked covariates for all observations (X_1 stacked on top of X_2), so I can add and subtract (X_2*d2)*b1:

y = (X_1*d1)*b1 + (X_2*d2)*b1 + (X_2*d2)*(b2-b1) + d1*u1 + d2*u2
  = X*b1 + (X_2*d2)*(b2-b1) + d1*u1 + d2*u2

In this formulation, I estimate not b1 and b2 but b1 and (b2-b1). That is numerically more stable, and I can still test that b2==b1 by testing whether (b2-b1)=0.
Let’s fit this model:
. regress price mpg weight mpg2 weight2 group2
      Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F( 5, 68)       =      9.10
       Model |   254624083         5  50924816.7   Prob > F        =    0.0000
    Residual |   380441313        68  5594725.19   R-squared       =    0.4009
-------------+----------------------------------   Adj R-squared   =    0.3569
       Total |   635065396        73  8699525.97   Root MSE        =    2365.3

------------------------------------------------------------------------------
       price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         mpg |   13.14912   177.2275     0.07   0.941    -340.5029    366.8012
      weight |   3.517687   .9754638     3.61   0.001     1.571179    5.464194
        mpg2 |  -183.6965   201.5951    -0.91   0.365    -585.9733    218.5803
     weight2 |  -3.464949   1.280728    -2.71   0.009    -6.020602   -.9092956
      group2 |   15116.17   7665.557     1.97   0.053    -180.2075    30412.56
       _cons |  -5431.147   6337.479    -0.86   0.394    -18077.39    7215.096
------------------------------------------------------------------------------
and, if I want to test whether the coefficients are the same, I can do
. test _b[mpg2]=0, notest

 ( 1)  mpg2 = 0

. test _b[weight2]=0, accum

 ( 1)  mpg2 = 0
 ( 2)  weight2 = 0

       F(  2,    68) =    5.61
            Prob > F =    0.0056
and that gives the same answer yet again. If I want to test whether *ALL* the coefficients are the same (including the intercept), I can type
. test _b[mpg2]=0, notest

 ( 1)  mpg2 = 0

. test _b[weight2]=0, accum notest

 ( 1)  mpg2 = 0
 ( 2)  weight2 = 0

. test _b[group2]=0, accum

 ( 1)  mpg2 = 0
 ( 2)  weight2 = 0
 ( 3)  group2 = 0

       F(  3,    68) =    4.07
            Prob > F =    0.0102
Just as before, I can test any subset.
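For example, to ask only whether the coefficient on weight differs between the two groups, leaving mpg and the intercept unrestricted, I could type (output not shown)

. test _b[weight2]=0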
Using this difference formulation with three groups, I would start with

y = (X_1*d1)*b1 + (X_2*d2)*b2 + (X_3*d3)*b3 + d1*u1 + d2*u2 + d3*u3

and rewrite it as

y = X*b1 + (X_2*d2)*(b2-b1) + (X_3*d3)*(b3-b1) + d1*u1 + d2*u2 + d3*u3
Let’s create the group variables and fit this model:
. sysuse auto, clear
. generate group1=rep78==3
. generate group2=rep78==4
. generate group3=(group1+group2)==0
. generate mpg1=mpg*group1
. generate weight1=weight*group1
. generate mpg2=mpg*group2
. generate weight2=weight*group2
. generate mpg3=mpg*group3
. generate weight3=weight*group3
. regress price mpg weight mpg2 weight2 group2 mpg3 weight3 group3
      Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F( 8, 65)       =      5.80
       Model |   264415585         8  33051948.1   Prob > F        =    0.0000
    Residual |   370649811        65  5702304.78   R-squared       =    0.4164
-------------+----------------------------------   Adj R-squared   =    0.3445
       Total |   635065396        73  8699525.97   Root MSE        =    2387.9

------------------------------------------------------------------------------
       price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         mpg |   13.14912   178.9234     0.07   0.942    -344.1855    370.4837
      weight |   3.517687   .9847976     3.57   0.001      1.55091    5.484463
        mpg2 |   130.5261   336.6547     0.39   0.699    -541.8198     802.872
     weight2 |   -2.18337   1.837314    -1.19   0.239     -5.85274       1.486
      group2 |   4560.193   12222.22     0.37   0.710    -19849.27    28969.66
        mpg3 |  -194.1974   216.3459    -0.90   0.373      -626.27    237.8752
     weight3 |  -3.160952    1.73308    -1.82   0.073    -6.622152    .3002481
      group3 |   14556.66   9167.998     1.59   0.117    -3753.101    32866.41
       _cons |  -5431.147    6398.12    -0.85   0.399    -18209.07    7346.781
------------------------------------------------------------------------------
If I want to test whether the three groups are the same in the Wald-test sense, I can type
. test (_b[mpg2]=0) (_b[weight2]=0) (_b[group2]=0)
>      (_b[mpg3]=0) (_b[weight3]=0) (_b[group3]=0)

 ( 1)  mpg2 = 0
 ( 2)  weight2 = 0
 ( 3)  group2 = 0
 ( 4)  mpg3 = 0
 ( 5)  weight3 = 0
 ( 6)  group3 = 0

       F(  6,    65) =    2.28
            Prob > F =    0.0463
I could more easily type the above command as
. testparm mpg2 weight2 group2 mpg3 weight3 group3

 ( 1)  mpg2 = 0
 ( 2)  weight2 = 0
 ( 3)  group2 = 0
 ( 4)  mpg3 = 0
 ( 5)  weight3 = 0
 ( 6)  group3 = 0

       F(  6,    65) =    2.28
            Prob > F =    0.0463
Alternatively, we can use factor variables and contrast to perform the same test:
. generate group=cond(rep78==3,1,cond(rep78==4,2,3))
. regress price c.mpg##i.group c.weight##i.group
      Source |       SS           df       MS      Number of obs   =        74
-------------+----------------------------------   F( 8, 65)       =      5.80
       Model |   264415585         8  33051948.1   Prob > F        =    0.0000
    Residual |   370649811        65  5702304.78   R-squared       =    0.4164
-------------+----------------------------------   Adj R-squared   =    0.3445
       Total |   635065396        73  8699525.97   Root MSE        =    2387.9

--------------------------------------------------------------------------------
         price | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
---------------+----------------------------------------------------------------
           mpg |   13.14912   178.9234     0.07   0.942    -344.1855    370.4837
               |
         group |
            2  |   4560.193   12222.22     0.37   0.710    -19849.27    28969.66
            3  |   14556.66   9167.998     1.59   0.117    -3753.101    32866.41
               |
   group#c.mpg |
            2  |   130.5261   336.6547     0.39   0.699    -541.8198     802.872
            3  |  -194.1974   216.3459    -0.90   0.373      -626.27    237.8752
               |
        weight |   3.517687   .9847976     3.57   0.001      1.55091    5.484463
               |
group#c.weight |
            2  |   -2.18337   1.837314    -1.19   0.239     -5.85274       1.486
            3  |  -3.160952    1.73308    -1.82   0.073    -6.622152    .3002481
               |
         _cons |  -5431.147    6398.12    -0.85   0.399    -18209.07    7346.781
--------------------------------------------------------------------------------
. contrast group group#c.mpg group#c.weight, overall

------------------------------------------------------
               |         df           F        P>F
---------------+--------------------------------------
         group |          2        1.29     0.2835
               |
   group#c.mpg |          2        0.78     0.4617
               |
group#c.weight |          2        1.88     0.1602
               |
       Overall |          6        2.28     0.0463
               |
   Denominator |         65
------------------------------------------------------
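If you prefer testparm here as well, I believe the same overall joint test can be requested after the factor-variable regression by using factor-variable notation (output not shown):

. testparm i.group i.group#c.mpg i.group#c.weight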