Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
re: st: Unbalanced repeated measures analysis question
From
"Airey, David C" <[email protected]>
To
"[email protected]" <[email protected]>
Subject
re: st: Unbalanced repeated measures analysis question
Date
Thu, 22 Jul 2010 10:47:41 -0500
I recently posted a message pointing to a book on mixed models for non-statisticians that I like:
<http://www-personal.umich.edu/~bwest/almmussp.html>
If I can reiterate your design:
You have multiple sessions, let's say at least 3. At each session, the same N subjects return to be measured by a different set of judges, although you have the occasional missing subject at a given session. Each judge measures every subject on a given session with (a) one of 3 novel methods, and (b) the gold standard method. The endpoint measure is the difference between (a) and (b).
It's not clear what your null hypothesis is, as testing equivalence is not the same as testing differences (e.g., <http://www.graphpad.com/library/BiostatsSpecial/article_182.htm>).
Design:
Subject crossed with session: subjects return to each session
Judge crossed with subject: on a given session, every judge measures every subject
Judge nested in session: there are different judges for each session
(?) Judge nested in method: on a given session, a judge uses only one method, plus the gold standard
Method crossed with subject: on a given session, each subject is measured on all three methods
etc.
factors: subject, judge, method, session
subject and judge are random effects
method is a fixed effect
session is a fixed effect
correlations/clustering/repeated measures:
subjects are measured repeatedly over sessions
crossed random effects (subject x judge)
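With that structure, a mixed-model fit in Stata 10 would go through -xtmixed-. The following is only a sketch under assumptions: the variable names (diff, method, session, judge, subject) are placeholders, the -xi:- prefix is used because Stata 10's -xtmixed- does not accept factor-variable notation directly, and the exact random-effects specification would need to be settled with a statistician.

. * Hedged sketch, assuming one row per subject-judge measurement
. * with diff = novel method minus gold standard.
. * _all: R.judge crosses judges with subjects; because judge IDs are
. * unique across sessions, this also absorbs the nesting in session.
. xi: xtmixed diff i.method i.session || _all: R.judge || subject: , reml

Since each judge uses only one novel method, the judge random effect and the fixed method effect are partially confounded, which is one reason to get statistical advice before relying on this model.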
Before trying any analysis, I'd graph the data in every way that might help show what is going on, for example by tying subjects together over sessions. I'd keep the graphs as atomized as possible, to show as much of the raw data as possible, as well as making graphs of group means.
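As a starting point, something like the following might do; again the variable names (diff, subject, session, method) are assumptions about your dataset:

. * Hedged sketch: one line per subject across sessions (raw data),
. * then mean differences by method within session (group means).
. xtset subject session
. xtline diff, overlay legend(off)
. graph bar (mean) diff, over(method) over(session)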
If I were in your shoes, depending on the importance of the study, I'd take the design and graphs and a statement of the goals to a statistician, to understand what can be salvaged using mixed models, or if the analysis could be simplified by ignoring (collapsing over) some factors.
Good luck!
> Hi
>
> I have data on measuring a biological property for three different
> methods plus a gold standard. Different people were trained in each
> method (1,2 or 3) and measured the same subjects during different
> sessions, together with the gold standard measurement.
>
> So the data look like
> SubjectID MeasurerID MeasurerType Result GoldStandard Diff
> 1 1 1 95 99 -4
> 1 2 3 102 99 +3
> 1 3 2 92 99 -7
> ...
> 1 10 3 105 99 +6
> 2 1 3 98 100 -2
> ...
>
> Sometimes patients would be called in to see the consultant and so
> were missed by a particular measurer, but otherwise all the measurers
> would measure all the patients seen in a particular session. Different
> sets of measurers (but all trained by methods 1,2 or 3) were used on
> each session (individual measurers 1-10 on session 1, 11-20 on session
> 2 etc).
>
> The gold standard measurements on each session are roughly normally
> distributed, as are the differences from the gold standard. We are
> interested in the accuracy of each of the three methods.
>
> Is it OK to do some sort of repeated measures ANOVA here, with an
> unbalanced design? If it is what would be the syntax (Stata 10)? Sorry
> to sound pathetic but I just can't get the anova command with the
> repeated option to work here.
>
> Is there a better measure to use than the difference to reflect the
> fact that we are interested in a comparison with a gold standard?
>
> Thank you
> Karin
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/