>Does anyone have any experience with this? Looking in the archive I
>saw that this problem has been around since 2004 at least... I found
>the suggestion of splitting the dataset into chunks but I doubt it
>could be significantly faster than my naive code above;
Have you tried it?
The last time I did something like this, cutting a large -reshape-
into smaller chunks cut the running time by more than half.
The time savings will depend on your machine; mine was pushing
the memory limit, hence the work-around.
The obvious tradeoff is the programming you would have to do. But
it's much easier than going through -file-. Safer, too, I would imagine.
Cut the variables into groups of about 30, add unique identifiers,
-reshape- each smaller chunk, and then -merge- the outputs
from each -reshape- on the identifiers.
Roy