RE: st: Re: Repeated Measures ANOVA vs. Friedman test
From: "Feiveson, Alan H. (JSC-SK311)" <[email protected]>
To: "[email protected]" <[email protected]>
Subject: RE: st: Re: Repeated Measures ANOVA vs. Friedman test
Date: Tue, 22 May 2012 10:37:09 -0500
One more thing - if you can "get away with" the ANOVA assumptions and everything is balanced, ANOVA should give test levels closer to nominal than -xtmixed-, because the former uses finite degrees of freedom for error while the latter relies on asymptotic Z-statistics.
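For concreteness, a minimal sketch of the two approaches in a balanced split-plot layout, assuming long-form data with hypothetical variables y (outcome), id (subject), group, and time:

* Repeated-measures (split-plot) ANOVA: subjects nested within group;
* exact F tests with finite error degrees of freedom.
anova y group / id|group time group#time, repeated(time)

* Mixed-model analogue: random intercept for each subject; fixed-effect
* tests rest on asymptotic z / chi-squared statistics.
xtmixed y i.group##i.time || id:, reml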
Al Feiveson
-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Nick Cox
Sent: Tuesday, May 22, 2012 9:48 AM
To: '[email protected]'
Subject: RE: st: Re: Repeated Measures ANOVA vs. Friedman test
I support the general tenor of this thread.
Specifically, -devnplot- (SSC) is, I suggest, likely to be an especially useful plot for this size and kind of data, whether for raw data or for residuals.
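A minimal sketch, assuming -devnplot- is installed from SSC and hypothetical variables y and group; the call on the last line is an assumption, so check -help devnplot- for the actual syntax and options:

ssc install devnplot      // Nick Cox's deviation plot, from SSC
help devnplot             // confirm syntax and options
devnplot y group          // assumed usage: deviations of y, subdivided by group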
Nick
[email protected]
Rob Ploutz-Snyder wrote:
I have to agree with Joseph's points below. We (NASA) regularly deal
with small-n given the nature of our research. We are often looking
for longitudinal effects (e.g., measures taken pre-flight, during
flight, and post-landing), sometimes with independent-measures
factors as well. Thus a mixed-factorial design. We use ANOVA where
possible, but we also work with -xtmixed-.
I would recommend that you pay close attention to the residuals, as
one overly influential outlier can really impact the model fit, and
thus your inferences. If you find one or more, it's a difficult call
whether you really have an unusual observation or whether your small
n just makes a point look like an outlier when it's really part of
the distribution and you don't have enough observations to "see"
that. Either way, you need to look and then deal with the
consequences.
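As a rough sketch of that residual check after -xtmixed-, with the same hypothetical variable names (y, id, group, time):

* Fit the mixed model, then look for overly influential observations.
xtmixed y i.group##i.time || id:, reml
predict res, residuals                  // observation-level residuals
summarize res
list id group time y res if abs(res) > 2*r(sd), sepby(id)   // flag large residuals
graph box res, over(group)              // residual spread by group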
On Tue, May 22, 2012 at 7:52 AM, Joseph Coveney <[email protected]> wrote:
> Steve wrote:
>
> I was going to compare some data from a pilot study where there were
> repeated measures taken from subjects (1 measurement each at baseline,
> 2 weeks and 4 weeks). I've got a small sample size (n=6 per group)
> and the outcome of interest is a continuous variable. My question is
> whether I can use a repeated measures ANOVA to evaluate such a small
> sample size or whether I should go with Friedman? Or should I use
> something else - a mixed model perhaps?
>
> I did draw histograms and box plots to see what the distributions
> look like, and they appear more or less normally distributed, but it's
> hard to say with such a small sample size. Additionally, -sktest-
> gave a p-value >0.05. So is it okay to use RM ANOVA for n of 6 per group?
>
> --------------------------------------------------------------------------------
>
> You say "n=6 per group", which implies that you've got a split-plot design or
> something related to it. How were you going to use Friedman's test, unless your
> hypothesis of interest is solely change over time pooled across both (all?)
> groups?
>
> I'd stick with parametric tests.
>
> 1. If your hypothesis is mainly about group differences, then you can sum across
> times and do a t-test (one-way ANOVA) on the within-person sums. Summing helps
> normalize.
>
> 2. If your hypothesis is mainly about time differences, then do paired t-tests
> against baseline. Differencing helps normalize. (I wouldn't worry about
> multiple comparisons here--this is a pilot study after all.)
>
> 3. If your scientific interest is focused on a group-by-time interaction, then
> rank-based tests become problematic, anyway.
>
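A minimal Stata sketch of points 1 and 2 above, assuming long-form data with hypothetical variables y, id, group, and time coded 0 (baseline), 1 (2 weeks), and 2 (4 weeks):

* 1. Group differences: one within-person sum per subject, then a one-way test.
preserve
collapse (sum) ysum = y, by(id group)
ttest ysum, by(group)          // two groups; use -oneway ysum group- for more
restore

* 2. Time differences: paired t-tests of each follow-up against baseline.
preserve
keep id group time y
reshape wide y, i(id group) j(time)
ttest y1 == y0                 // 2 weeks vs. baseline
ttest y2 == y0                 // 4 weeks vs. baseline
restore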
> In the spirit of a pilot study, I would not agonize over how the p-values are
> affected by non-normality; the purposes are to get a sense of the quality of the
> data (their properties, characteristics etc.), to decide whether there's any
> point in going further, and if so to get an idea about sample size and whether
> the study design is suitable as-is.
>
> As far as normality goes, I would graph* the residuals to scan for gross
> deviations from normality as part of the background information to consider when
> thinking about the design and inferential methods of the main study.
>
> Joseph Coveney
>
> *Type "help diagnostic_plots" in Stata's command line. Residual plots are part
> of getting a feel for the quality of the data.
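A short sketch of those residual plots after the repeated-measures ANOVA, again with the hypothetical variable names used above:

* Fit the split-plot ANOVA, then inspect residuals graphically.
anova y group / id|group time group#time, repeated(time)
predict r, residuals
predict fit, xb
qnorm r                    // quantile-normal plot of residuals
scatter r fit, yline(0)    // residuals versus fitted values
histogram r, normal        // histogram with overlaid normal density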
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/