Re: st: RE: Re: pwcompare
From: Janet Hill <[email protected]>
To: "[email protected]" <[email protected]>
Subject: Re: st: RE: Re: pwcompare
Date: Sun, 27 Nov 2011 20:23:57 +0000 (GMT)
Tony,
Thanks for that. I will go back to West, Welch & Galecki to try to improve my understanding of the underlying theory.
I am now thoroughly confused about how to compare the groups after using xtmixed in a repeated-measures scenario. Can anybody offer me any suggestions or advice?
Thank you,
Janet
----- Original Message -----
From: "Lachenbruch, Peter" <[email protected]>
To: "[email protected]" <[email protected]>
Cc:
Sent: Saturday, 26 November 2011, 18:44
Subject: st: RE: Re: pwcompare
I didn't see an answer to this. I suspect it is related to the fact that the standard errors in the xtmixed model account for the random effects, and so are larger. Note that the contrasts are the same in both cases, as they are the sample means.
Tony
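A minimal sketch of one way to look at this after -xtmixed-, reusing the variable names from the commands quoted below (lhist, group, time, dog); the particular options are assumptions, and no output is reproduced here:

    * mixed model with a random intercept for dog (a sketch; options assumed)
    xtmixed lhist i.group##i.time || dog:, reml
    * estimated group means, treating the factors as balanced
    margins group, asbalanced
    * pairwise group contrasts with the model-based standard errors
    pwcompare group, effects asbalanced

-margins- shows the estimated group means the contrasts are built from, and -pwcompare- reports the pairwise differences with the mixed-model standard errors; with balanced data those means match the raw group means, which is why the contrasts agree with the anova results and only the standard errors differ.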
________________________________________
From: [email protected] [[email protected]] On Behalf Of Janet Hill [[email protected]]
Sent: Friday, November 25, 2011 8:35 AM
To: [email protected]
Subject: st: Re: pwcompare
Why does pwcompare give different results after anova and xtmixed? Using the following syntax:
xtmixed lhist i.group##i.time || dog:, reml cov(id)
or
anova lhist group / dog|group time time#group, repeated(time)
followed by
pwcompare group, emptycells(reweigh) eff asbalanced post
I get the following for xtmixed
------------------------------------------------------------------------------
| Unadjusted Unadjusted
| Contrast Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
lhist |
group |
2 vs 1 | -.4591643 .5681373 -0.81 0.419 -1.572693 .6543643
3 vs 1 | 1.200595 .525993 2.28 0.022 .1696677 2.231522
4 vs 1 | -.3857097 .525993 -0.73 0.463 -1.416637 .6452175
3 vs 2 | 1.659759 .5681373 2.92 0.003 .5462306 2.773288
4 vs 2 | .0734546 .5681373 0.13 0.897 -1.040074 1.186983
4 vs 3 | -1.586305 .525993 -3.02 0.003 -2.617232 -.5553775
------------------------------------------------------------------------------
and for the anova
------------------------------------------------------------------------------
| Unadjusted Unadjusted
| Contrast Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
group |
2 vs 1 | -.4591643 .1047385 -4.38 0.000 -.6722564 -.2460723
3 vs 1 | 1.200595 .096969 12.38 0.000 1.00331 1.39788
4 vs 1 | -.3857097 .096969 -3.98 0.000 -.5829947 -.1884248
3 vs 2 | 1.659759 .1047385 15.85 0.000 1.446667 1.872851
4 vs 2 | .0734546 .1047385 0.70 0.488 -.1396375 .2865467
4 vs 3 | -1.586305 .096969 -16.36 0.000 -1.78359 -1.38902
------------------------------------------------------------------------------
I can see that the standard errors are different, but the question arises: should I use xtmixed or anova for repeated measures?
I am using Stata 12.0, update 10 Nov 2011.
Thanks,
Janet
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/