From: Joerg Luedicke <[email protected]>
To: [email protected]
Subject: Re: st: Comparing two response variables
Date: Tue, 1 Mar 2011 00:18:20 -0500
On Mon, Feb 28, 2011 at 10:32 PM, Debs Majumdar <[email protected]> wrote:
> The data were collected from 10,000 students who answered questions
> (ordinal items) about their 500 teachers. Two total scores were computed
> from their responses, one based on 5 items and one based on 7 (the same
> 5 plus 2 additional items). I have been told that the new one (the total
> score from the 7 items) makes more sense qualitatively. I just want to
> figure out whether one can quantify this.
>
So you want to measure a latent variable using 5 or 7 indicator items
(indicators in the sense of factor indicators, not dummy variables), and
you are wondering whether one version does a "better" job of capturing
the phenomenon of interest, right? This is usually known as the question
of construct validity. Assuming this is the first time this particular
instrument has been used, there is not much you can do about validity
directly. What you can check is internal consistency. I would first look
at the inter-item correlations: are the 2 extra items highly correlated
with the other 5? What does the correlation matrix look like?
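
As a minimal sketch (I am assuming the items are stored as q1-q7, which
is a hypothetical naming):

. pwcorr q1-q7

With 10,000 students, significance stars will not tell you much; just
look at the size of the correlations, in particular between q6/q7 and
the first 5 items.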
You could then check, as Richard already suggested, whether alpha
differs between the 5-item and the 7-item version.
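
Again a sketch under the same naming assumption:

. alpha q1-q5, item
. alpha q1-q7, item

The -item- option shows the item-rest correlations and what alpha would
be if a given item were dropped, which tells you directly what the 2
extra items contribute.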
You could also fit a confirmatory factor analysis for each version,
preferably with varying intercepts across teachers, and see whether the
two models differ with respect to model fit.
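
A rough sketch with -sem- (available in Stata 12 or later), treating the
items as continuous for simplicity; with ordinal items and teacher-level
random intercepts you would move to -gsem- or -gllamm-:

. sem (F -> q1 q2 q3 q4 q5), latent(F)
. estat gof, stats(all)

. sem (F -> q1 q2 q3 q4 q5 q6 q7), latent(F)
. estat gof, stats(all)

Note that the two models are fit to different sets of observed
variables, so compare fit indices such as RMSEA and CFI descriptively
rather than with a likelihood-ratio test.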
If the 5-item and the 7-item version perform equally well on all of
these checks, you could simply use the 7-item version if that is the one
slightly preferred by theoretical intuition. The same of course applies
if the 7-item version performs better. The only difficult case is when
the 5-item version shows better reliability but the 7-item version makes
more sense theoretically, because an instrument's reliability does not
tell you anything about its validity: whatever the 5 items measure is
measured well, you just cannot be sure what exactly it is that they
measure. In other words, the instrument may be measuring only part of
what it was supposed to measure, but measuring that part well.
J.