Thanks for your answer.
Let me clarify some things:
> >But how can I test whether, for example, the effect of x1 differs
> >across groups? Is
> >
> >testparm g2x1 g3x1 g4x1
> >
> >the correct way to do the test?
>
> If the test statistic is insignificant, then that means that
> the effect of
> x1 does not significantly differ across groups.
>
> >Secondly, can I use the differences between
> >
> >g2x1 <-> g3x1 <-> g4x1
> >
> >to describe the "strength" of the differences (amount of
> difference, in
> >the same meaning as effect strength compared to statistical
> >significance)?
>
> I am not totally sure I follow you, but a command like
>
> testparm g2x1 g3x1 g4x1, equal
>
> would test whether the effects of x1 are the same in groups
> 2, 3, and 4.
First of all, my second question was about something different; I am
going to explain that below. But first, a different question:
If you say "testparm g2x1 g3x1 g4x1, equal" tests whether the effects
of x1 are the same in all groups, how does it differ from "testparm
g2x1 g3x1 g4x1", which should test the same thing (at least that is
what I figured out and what you confirmed some lines above)?
Or is there a misunderstanding on my part, and "do not differ across
groups" and "the effects are the same in all groups" are two
different things?
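To make the two null hypotheses concrete, here is my current reading
of the two commands (a sketch; the model setup with group 1 as the
base category and g2x1 etc. as group-by-x1 interactions is
hypothetical, following the FAQ-style coding):

  * hypothetical model: group 1 is the base category
  regress y x1 g2 g3 g4 g2x1 g3x1 g4x1

  * H0: g2x1 = g3x1 = g4x1 = 0
  * (no group's x1 effect deviates from group 1's)
  testparm g2x1 g3x1 g4x1

  * H0: g2x1 = g3x1 = g4x1
  * (groups 2-4 share one deviation, which need not be zero)
  testparm g2x1 g3x1 g4x1, equal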
Finally, what I was actually asking about was something different.
Just a hypothetical example:
Let's assume I have the following coefficients:
g2x1 .12
g3x1 .34
g4x1 .37
And I find that they are significantly different among groups (with one
of the tests discussed above, depending on which one is the correct
one).
I also have a second group of coefficients:
g2x2 .08
g3x2 .50
g4x2 .80
Again, differences among groups are statistically significant.
Can I meaningfully discuss the values of the differences between the
coefficients, for example by claiming that the differences for x2 are
stronger because they are larger? I personally feel uncomfortable
talking only about statistical significance without talking about
effect size, and in this case the effect size should be buried in the
differences, but I am not sure whether it is correct to discuss the
differences in this way.
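If it helps to make this concrete, something like lincom would at
least attach a standard error and confidence interval to each single
difference (a sketch; the variable names come from my hypothetical
example above):

  * after fitting the interacted model, estimate single differences
  * directly, with standard errors and confidence intervals
  lincom g3x1 - g2x1
  lincom g3x2 - g2x2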
> There are potentially zillions of tests you can be doing
> here, so you want
> to be careful you aren't just capitalizing on chance.
Sure, that's correct. My only problem is that I have four variables that
all could be different across groups, but don't have to (actually I have
more, but I am willing to hold the others constant based on theoretical
assumptions and my research interests).
Or is it better to run several separate regressions, letting only one
of the variables vary across groups while holding the others
constant?
In that case I could use the simple testparm command as outlined in the
FAQ to test the differences of a specific variable.
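In other words, something like this for each variable in turn (a
sketch, assuming the FAQ-style group dummies and interaction
variables have already been created):

  * let only x1's effect vary across groups; x2-x4 held constant
  regress y x1 g2 g3 g4 g2x1 g3x1 g4x1 x2 x3 x4
  testparm g2x1 g3x1 g4x1

  * then a separate regression letting only x2 vary, and so on
  regress y x2 g2 g3 g4 g2x2 g3x2 g4x2 x1 x3 x4
  testparm g2x2 g3x2 g4x2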
Thanks again for any help,
Daniel Schneider
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/