Re: st: RE: Mean test in a Likert Scale
From: Rob Ploutz-Snyder <[email protected]>
To: [email protected]
Subject: Re: st: RE: Mean test in a Likert Scale
Date: Fri, 31 Aug 2012 13:50:50 -0500
David,
Thanks for directing me to this interesting article...
In reading their literature review, they cite evidence both for and
against labeling each response choice, and I suspect that, as with many
of these sorts of topics, there are probably highly qualified social
scientists in both camps. Nevertheless, their own data support their
hypothesis that reliability is increased with extensive labeling
versus labeling only the endpoints (i.e., the two examples that I
provided), and they provide strong evidence. It's a well-designed
study authored by faculty from two top-notch university psychology
programs, published in a respectable methods journal...good stuff.
They base the hypothesis that fully labeled scales would improve
reliability on the notion that error variance is at least partly due
to ambiguity in the wording of questions and answer scales, and on the
idea that respondents have an innate tendency to want to "explain"
their answers; they postulate that labels reduce the ambiguity of the
answer choices and/or provide a default "explanation" for respondents.
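(As an aside, for anyone who wants to check this on their own data:
reliability of this sort is usually summarized with Cronbach's alpha,
which Stata's -alpha- command computes. A minimal sketch, assuming
five hypothetical items named q1-q5, each scored 1-5:
. alpha q1-q5, item
The -item- option also reports how alpha changes as each item is
dropped, which is handy when comparing scale versions.)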
It is a compelling study that I will have to re-read a few times, as
it goes against my intuition and training.
Rob
On Fri, Aug 31, 2012 at 11:32 AM, David Radwin <[email protected]> wrote:
> Rob,
>
> It may be the case that not labeling the middle points of a scale, as in
> your first example, justifies the assumption of equal spacing (deltas).
> But the literature suggests that verbally labeling all points on a scale,
> as in your second example, leads to more reliable measurement. See, for
> example:
>
> Alwin DF, Krosnick JA. 1991. The reliability of survey attitude
> measurement: The influence of question and respondent attributes.
> Sociological Methods & Research 20:139-81.
> http://deepblue.lib.umich.edu/bitstream/2027.42/68969/2/10.1177_0049124191020001005.pdf
>
> David
> --
> David Radwin
> Senior Research Associate
> MPR Associates, Inc.
> 2150 Shattuck Ave., Suite 800
> Berkeley, CA 94704
> Phone: 510-849-4942
> Fax: 510-849-0794
>
> www.mprinc.com
>
>
>> -----Original Message-----
>> From: [email protected] [mailto:owner-
>> [email protected]] On Behalf Of Rob Ploutz-Snyder
>> Sent: Friday, August 31, 2012 9:13 AM
>> To: [email protected]
>> Subject: Re: st: RE: Mean test in a Likert Scale
>>
>> My 2 cents...when designing these sorts of instruments...
>>
>> I was trained that a true Likert scale doesn't label each of the
>> points on the 5-point (or other) scale, but instead has only two
>> labels, one at each extreme. For example:
>>
>> I like Statalist...... Completely Disagree 1 2 3 4 5 Completely Agree
>>
>> This is in CONTRAST to a scale that labels each and every point
>> (sometimes called "Likert-type" or "modified Likert"), for example:
>>
>> 1=completely disagree
>> 2=disagree
>> 3=neutral
>> 4=agree
>> 5=completely agree
>>
>> With a true Likert scale, while the measurement is still not
>> continuous, the distance between categories is not subjective: the
>> delta between "1" and "2" is the same as the delta between "2" and
>> "3," and so on, and it is assumed that survey respondents can
>> appreciate this. The same cannot be assumed about the difference
>> between "completely disagree" and "disagree" being equal to the
>> delta between "disagree" and "neutral."
>>
>> So in that way, a true Likert scale removes some of the subjectivity
>> in the deltas and comes closer to a proper ordinal scale, as opposed
>> to a purely categorical one.
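>>
>> (If one takes the ordinal view seriously, a natural Stata approach
>> is an ordered model rather than a comparison of means. A minimal
>> sketch, assuming a hypothetical 1-5 item -rating- and a hypothetical
>> binary -group- variable:
>>
>> . ologit rating i.group
>>
>> -ologit- respects the ordering of the categories without assuming
>> equal intervals between them.)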
>>
>> That still doesn't justify using parametric statistical techniques...
>> However, most well-vetted sociological or psychological instruments
>> are designed with multiple questions that, together, measure a
>> particular construct. Social scientists don't usually intend to
>> compare responses on single questions; instead they ask many
>> questions that cluster together, often verified by exploratory or
>> confirmatory factor analysis, and "factor scores" are then created
>> to capture the overall construct of interest. These factor scores
>> can be derived by different methods, the simplest being a mean of
>> the items that cluster together, but usually by more sophisticated
>> regression-based methods that weight each item according to how well
>> it correlates with the overall factor structure. Factor scores are
>> continuously scaled, unlike the individual items used to derive
>> them, and it is these factor scores that are often analyzed with
>> parametric statistical techniques (as sketched below).
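>>
>> (For concreteness, a minimal Stata sketch of both scoring methods,
>> assuming five hypothetical items q1-q5 that cluster on one factor:
>>
>> . * exploratory factor analysis on the items
>> . factor q1-q5, pf
>> . * regression-based factor scores, continuously scaled
>> . predict fscore, regression
>> . * simplest alternative: the mean of the clustered items
>> . egen fscore_mean = rowmean(q1-q5)
>>
>> The scores in -fscore- could then go into the usual parametric
>> tools, e.g. -ttest fscore, by(group)- for a hypothetical binary
>> -group- variable.)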
>>
>> Whether or not the factor scores are normally distributed in the
>> population (the real question) depends on the particulars of each
>> research study, but I wouldn't categorically assert that the
>> assumption is invalid.
>
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/