Agreed. The "here" was carrying a lot of weight
in my statement. In general, -logit- is surely
better than -regress- for predicting 0-1 variables;
in that context -round()- should be unproblematic.
My theory is that, after recoding to 0/1, the logit and regress approaches
would produce virtually identical results, with the main differences
occurring when the predicted probabilities were very close to .5. I can't
prove this, mind you, but I did try a quick simulation of 1000 cases with
100 missing values on y, and the 0/1 predictions were the same in 99 of the
100 cases. If I had a lot of Xs with a lot of scattered missing data, my
guess is that -impute- would be far easier to use and would produce very
similar results to doing it the "right" way.
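For what it's worth, the quick check I describe above can be sketched outside Stata as well. Here is a rough Python analogue (NumPy only; the coefficients, sample size, and seed are my own arbitrary choices, and the logit fit is a hand-rolled Newton/IRLS loop rather than any particular package): simulate 0/1 data, fit both a linear probability model (the -regress- analogue) and a logistic regression (the -logit- analogue), round both sets of predicted probabilities, and compare.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 1000 cases: one predictor, binary outcome from a logistic model.
n = 1000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 1.5 * x)))
y = rng.binomial(1, p_true).astype(float)

X = np.column_stack([np.ones(n), x])

# Linear probability model (OLS, the -regress- analogue).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
p_ols = X @ beta_ols

# Logistic regression (the -logit- analogue) via Newton-Raphson / IRLS.
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-(X @ beta)))
    w = p * (1 - p)
    # Newton step: beta += (X' W X)^{-1} X' (y - p)
    beta += np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - p))
p_logit = 1 / (1 + np.exp(-(X @ beta)))

# Round both prediction sets to 0/1 and see how often they agree.
agree = np.mean(np.round(p_ols) == np.round(p_logit))
print(f"agreement after rounding: {agree:.3f}")
```

In runs like this the two rounded prediction sets agree in the high 90s percent of cases, with the disagreements concentrated where the predicted probability sits near .5, which is consistent with the result I saw in Stata.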