Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
R: st: RE: St: Panel data imputation
From
"Carlo Lazzaro" <[email protected]>
To
<[email protected]>
Subject
R: st: RE: St: Panel data imputation
Date
Wed, 22 Sep 2010 10:44:50 +0200
Maarten's remarks recall George Box's claim that "All models are wrong ...
some are useful".
David may want to perform some sensitivity analysis on his base case MI, as
recommended in Little RJA, Rubin DB. Statistical Analysis with Missing Data.
2nd ed. Hoboken: Wiley, 2002: 327-330; 335.
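A minimal sketch of what such a sensitivity analysis might look like in Stata: re-impute under a different imputation model and a larger number of imputations, then compare the pooled estimates against the base case. Variable names (y, x1, x2) and all option values are placeholders, not taken from David's data.

```stata
* Hypothetical sketch of an MI sensitivity analysis; y, x1, x2 are placeholders.
mi set wide
mi register imputed x1

* Base case: impute x1 with a linear regression model, m = 20.
mi impute regress x1 x2 y, add(20) rseed(12345)
mi estimate: regress y x1 x2

* Sensitivity check: a different imputation model (predictive mean
* matching) and a larger m; compare the pooled coefficients above.
mi impute pmm x1 x2 y, add(50) knn(5) replace rseed(12345)
mi estimate: regress y x1 x2
```

If the substantive conclusions shift noticeably between the two setups, that is a warning that the results are being driven by the imputation model rather than the data.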
Kind Regards,
Carlo
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On behalf of Maarten buis
Sent: Wednesday, 22 September 2010 10:24
To: [email protected]
Subject: Re: st: RE: St: Panel data imputation
--- On Tue, 21/9/10, David Bai wrote:
> My impression is that ignoring missing values (the default
> approach in Stata), which I assume means listwise deletion, has
> been criticized by many researchers, such as Paul Allison,
> because the sample without missing values may end up being
> very different from the original population.
Funny that you chose Paul Allison of all the authors who have
written on this to support that claim. In his little green Sage
book (Allison 2002), he very much supports listwise deletion as a
way of dealing with missing values, in a way that is very
similar to my own advice: either use listwise deletion or invest
a lot of time and effort in getting an imputation model right.
The latter is in many cases simply not an option because
the necessary time and effort are not available, and in
other cases it is not worth the effort.
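In Stata terms, the two options look roughly like this (variable names are placeholders, not from David's data; note that `mi impute chained` arrived in Stata 12, so at the time of this post one would have used the user-written `ice` instead):

```stata
* Option 1: listwise deletion, Stata's default behaviour. Any
* observation with a missing value in y, x1, or x2 is dropped.
regress y x1 x2

* Option 2: invest in an imputation model. With chained equations,
* each incomplete variable gets its own conditional model.
mi set mlong
mi register imputed x1 x2
mi impute chained (regress) x1 x2 = y, add(20) rseed(12345)
mi estimate: regress y x1 x2
```

Option 1 is one line; option 2 only pays off if the imputation models for x1 and x2 are actually well specified, which is the time-and-effort cost Maarten is pointing at.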
Also, the fact that there are researchers who criticize a
certain model does not mean that the model is suspect.
Remember that these researchers are telling you things they
just found out. In their enthusiasm for what they just
discovered, they easily overemphasize the "problems" they just solved.
Alternatively, if I were an economist or some other person
who only looks at the bad side of people (they call themselves
"realists", but we know better)*, then I would say that
researchers need to sell their new thing, and one way of
doing so is to claim that the old thing is bad. There are
strong pressures on researchers to continuously publish new
work. They don't have to lie, just strategically emphasize the
"problems" that the new method solves.
The fact that a model is not true is in itself not a problem.
A model is supposed to be a simplification of reality; that is
its very purpose. This necessarily means that models are wrong
("simplification" is just another word for "wrong in a somewhat
reasonable way"), so you can always find something wrong with
a model. The question is not whether your model is true, but
whether your model is useful. The problem with MI models is
that they are very sensitive and hard to diagnose: they
may bring you closer to your population parameters of
interest, but they could just as well take you further away
from them, and since they are so hard to diagnose it is very
hard to tell which of the two actually happened. That does
not sound like a very useful model to me... Again, if you really
know what you are doing, then there are special situations where
these models can be useful, but this method is nowhere near
ready to be a "default" method.
Hope this helps,
Maarten
Allison, Paul D. (2002) "Missing Data", Thousand Oaks: Sage.
(*) See also J in:
<http://www.stata.com/statalist/archive/2010-04/msg01234.html>
--------------------------
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
Germany
http://www.maartenbuis.nl
--------------------------
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*