From: Nick Cox <njcoxstata@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: MIXLOGIT: marginal effects
Date: Tue, 7 Feb 2012 09:00:01 +0000
I love logits too, and I am not especially an advocate of the linear probability model, but its defence seems simple. It is a model people might want to consider if it fits fairly well over the range of the data, not least because statistical people find probability a useful scale to think on. This can happen when the response proportion doesn't vary very much. Naturally I agree that a logit model might work as well or better in this circumstance.

Turn and turn about, most models have absurd implications if you look carefully. In the auto data I just regressed gallons per mile [NB] on weight and I get a positive intercept, which is quite unphysical. If I had a negative intercept, that would have been unphysical too. But I see no need to force the regression through the origin; that's a pragmatic decision. What would you do? Virtually every application of a model involves some absurdity, if only as an indirect implication. Even fitting a normal (Gaussian) distribution always means assigning a positive probability to something (physically, biologically, economically, ...) preposterous, but the probability is usually so small that we forget about it or realise that it is not a worry.

Old-style mechanics problems sometimes had a facetious edge underlining the art in approximation: "An elephant, whose mass may be neglected [taken as zero], ...." Most social science models seem to involve approximations even more outrageous (no names, no pack drill).

Nick

On Tue, Feb 7, 2012 at 7:50 AM, Clive Nicholas <clivelists@googlemail.com> wrote:
> Arne Risa Hole replied to Maarten Buis:
>
>> Thanks Maarten, I take your point, but as Richard says there is
>> nothing stopping you from calculating marginal effects at different
>> values of the explanatory variables (although admittedly it's rarely
>> done in practice). Also the LPM is fine as an alternative to binary
>> logit/probit, but what about multinomial models?
>
> I'm coming in on this late, but this is to say two things. I tend to
> agree with you over Maarten (whose posts I always read) about the
> usefulness of marginal effects and how they should be used (although
> Maarten is right that using such statistics as a single summary
> measure undermines the whole point of fitting non-linear models).
>
> However, both of you, IMVHO, are wrong, wrong, wrong about the linear
> probability model. There is no justification for the use of this model
> _at all_ when regressing a binary dependent variable on a set of
> regressors. Pampel's (2000) excellent introduction to logistic
> regression spends its first nine or so pages carefully explaining just
> why it is inappropriate (imposing linearity on a nonlinear
> relationship, predicting values out of range, nonadditivity, etc.).
> Since when was it in vogue to advocate its usage? I'm afraid that I
> don't really understand this.
>
> Simply put, it's logistic regression or, otherwise, don't bother yourself.
>
> Pampel, F. C. 2000. Logistic Regression: A Primer. Sage University
> Papers Series on Quantitative Applications in the Social Sciences,
> 07-132. Thousand Oaks, CA: Sage.
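For readers who want to reproduce the fuel-consumption example above, a minimal sketch in Stata might look like the following. It assumes the auto dataset shipped with Stata and takes "gallons per mile" as the reciprocal of mpg; the variable name gpm is just a convenient choice, not part of the original post.

* sketch: regress fuel consumption on weight in the auto data
sysuse auto, clear

* gallons per mile = fuel used per mile travelled
generate gpm = 1/mpg
label variable gpm "Gallons per mile"

* the constant (_cons) reported here is the intercept under discussion
regress gpm weight

Forcing the fit through the origin would be -regress gpm weight, noconstant-, which is the pragmatic choice Nick declines to make.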
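Similarly, the points about marginal effects at different covariate values and about the LPM's out-of-range predictions can be made concrete with a small sketch. The outcome and regressor used here (foreign and weight from the auto data) are chosen purely for illustration and are not from the thread.

sysuse auto, clear

* linear probability model: OLS on a 0/1 outcome
regress foreign weight
predict p_lpm
summarize p_lpm          // fitted "probabilities" can fall outside [0, 1]

* logit fit of the same relationship
logit foreign weight
predict p_logit, pr      // predicted probabilities stay inside (0, 1)

* marginal effect of weight evaluated at several covariate values,
* rather than reported as a single summary number
margins, dydx(weight) at(weight = (2000 3000 4000))

The -summarize- line is there to show how the OLS fit can yield predictions outside the unit interval, which is one of the objections Pampel (2000) lists against the linear probability model.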