Tobias Pfaff wrote:
I am using Intercooled Stata 9.2 on a laptop with an AMD 1.79 GHz
processor and 512 MB RAM. So far, Stata has worked well with all
datasets. Now we are analyzing a larger dataset with 70 variables and
180,000 observations. The dta-file is 224 MB. It takes two minutes
just to open the file, not to mention the processing time of simple
operations like -drop- or -replace-. Is that normal? I have tried
-compress-, which does not have any major impact on the file size.
What is a PROFESSIONAL WAY to handle such a dataset?
1.) Upgrade my computer equipment?
2.) Split up my dataset (which would be a big nuisance for the
analysis, I think)?
3.) ANY OTHER STATA COMMAND OR TRICK?
-----------------------------------------------------------------
Look at -help memory- if you haven't already done that.
The FAQ http://www.stata.com/support/faqs/data/howbig.html
shows that if your 70 variables were all floats, the dataset
should occupy approximately 50 MB. Apparently your dataset
includes some string variables. Dropping them, or converting them
to labeled numeric variables (see -help encode-), might be useful.
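For example, a session along these lines might help (a sketch only:
the filename and the string variable -region- are made up, and the
memory figure should be adjusted to what your laptop can spare):

```stata
* Allocate more memory before loading the large file (Stata 9 syntax)
set memory 400m

* "bigfile.dta" is a placeholder for your actual dataset
use bigfile.dta, clear

* Shrink each variable to the smallest storage type that holds it
compress

* Convert a categorical string variable to a labeled numeric variable,
* then drop the string original to save space
encode region, generate(region2)
drop region
```

A numeric variable created by -encode- carries value labels, so it
tabulates like the original string but takes far less room.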
Hope this helps
Svend
__________________________________________
Svend Juul
Institut for Folkesundhed, Afdeling for Epidemiologi
(Institute of Public Health, Department of Epidemiology)
Vennelyst Boulevard 6
DK-8000 Aarhus C, Denmark
Phone: +45 8942 6090
Home: +45 8693 7796
Email: [email protected]
__________________________________________
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/