Not to defend mis-specifying equations as a general strategy, but the issue of losing degrees of freedom is possibly more serious than implied by some of the responses to Alice's question. The loss of degrees of freedom has a direct effect on the precision of the parameter estimates.
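To see why, recall the textbook OLS variance formulas (a generic sketch, not Alice's particular model): with N observations and k estimated parameters,

    s^2 = RSS / (N - k)
    Var(\hat\beta_j) = s^2 [(X'X)^{-1}]_{jj}

Every extra parameter shrinks N - k, and unless it buys a proportional drop in RSS it inflates s^2 and hence every standard error; with a small N the denominator can get painfully small.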
To turn the (rhetorical?) question around, if your estimates are unbiased but completely unreliable, what have you gained? Remember, Alice was talking about small samples.
One should consider both bias and precision (and their implications for the job at hand) when making such decisions.
Indeed, the concept of mean square error trades off bias and precision.
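For reference, the decomposition behind that tradeoff:

    MSE(\hat\beta) = E[(\hat\beta - \beta)^2] = Var(\hat\beta) + [Bias(\hat\beta)]^2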
But what is at issue here is not merely bias, it is consistency: for a consistent estimator, bias disappears in the limit. An instrumental variables estimator is biased in small samples, but if properly specified it will be consistent, with bias going to zero as N -> \infty.
If, however, you specify the wrong DGP, you will be generating inconsistent estimates, that is, estimates whose bias does not diminish with sample size. What useful conclusions can be drawn from inconsistent estimates? Some samples do carry too little information to support much useful inference, but no sensible solution to that problem involves specifying a demonstrably incorrect DGP.
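To see the contrast numerically, here is a minimal Monte Carlo sketch (a hypothetical DGP with made-up parameters, nothing from Alice's problem). OLS applied to an endogenous regressor, the "wrong DGP" case, settles on the wrong answer no matter how large N gets, while a properly specified IV estimator is off in small samples but homes in on the truth as N grows:

    import numpy as np

    rng = np.random.default_rng(0)
    beta = 1.0  # true coefficient in the (hypothetical) DGP

    def simulate(n, reps=2000):
        """Median OLS and IV estimates over `reps` Monte Carlo draws."""
        ols, iv = [], []
        for _ in range(reps):
            z = rng.normal(size=n)          # valid, strong instrument
            u = rng.normal(size=n)          # structural error
            x = z + u + rng.normal(size=n)  # x endogenous: corr(x, u) > 0
            y = beta * x + u
            ols.append((x @ y) / (x @ x))   # OLS ignores the endogeneity
            iv.append((z @ y) / (z @ x))    # just-identified IV
        return np.median(ols), np.median(iv)

    for n in (25, 100, 1_000, 10_000):
        b_ols, b_iv = simulate(n)
        print(f"N={n:>6}: OLS error={b_ols - beta:+.3f}  "
              f"IV error={b_iv - beta:+.3f}")

Medians rather than means are reported because a just-identified IV estimator has heavy tails in small samples; the qualitative picture, OLS error roughly flat in N while IV error shrinks toward zero, is the point.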