Re: st: Weights
I really do not know why you find my response "frustrating", and
cannot see why my post should amount to an abuse of the list. Would
you rather have me sugarcoat things, or should I let all those posters
know that their solutions simply did not resolve my particular problem?
As it stands, I am wondering why packages differ so much in data
handling, and am a little disappointed in Stata in this particular
respect. I do think that is perfectly in line with what other users
would like to hear, rather than having me gloss over a problem that has
cost me five days (and counting) and which I imagine will crop
up in other users' daily routines as well...
Martin Weiss
Quoting Michael Blasnik <[email protected]>:
...
I find this response somewhat frustrating to read -- apparently
resolving the problem isn't very important to you but important enough
to ask people's advice on Statalist. If the Stata file is twice the
size of the ASCII CSV file then you are very likely storing some
numbers using bigger variable types than you need. Given the number of
lines and overall size of the file, it seems quite unlikely that the
problem is related to SPSS being able to compress strings. In
addition, there are very few numeric fields that most people use that
actually need double precision storage. If you know so little about
the data contained in the variables that you can't make storage type
decisions then I would guess that you don't need those variables and so
your problem can be solved by just dropping the variables you know
nothing about.
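
[For readers of the archive: a minimal sketch of the kind of storage audit Michael describes, in Stata syntax of that era. The variable name in the last line is purely illustrative.]

* -compress- losslessly demotes each variable to the smallest
* storage type that holds its current values.
compress

* List variables still stored as double -- candidates for a closer
* look, since few fields genuinely need double precision.
ds, has(type double)

* Only if you are certain single precision suffices for a variable:
* recast float somedoublevar, force   // lossy -- use with care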
M Blasnik
----- Original Message ----- From: "Martin Weiss"
<[email protected]>
To: <[email protected]>
Sent: Thursday, May 01, 2008 10:02 AM
Subject: RE: st: Weights
Thanks for all the replies,
yet the issue is not at all resolved. I had the file broken up along columns
overnight and -compress-ed. As there are slightly more than 600 columns, I
went for 7 packages of approx. 90 columns each. Later, I tried to -merge-
them, but the result is almost exactly the same as splitting along the rows
and -append-ing. Setting -mem- to 3G, which is perfectly feasible on 64-bit
machines, I managed to -merge- four out of seven. On the fifth run through
the -forv- loop, Stata complained about lack of -mem-. Adding up the sizes of
the 7 -compress-ed files yields the 5.5 G mentioned in the initial post
in this thread. -compress-ing did not help much, and neither did the
-makesmall- program posted in this thread. As for -recast, force-, that
sets off alarm bells for me, as it might destroy valuable information (which
I cannot check for every variable, as there are over 600 of them...). I do
actually like the peace of mind that comes with -compress-...
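
[For readers of the archive: the column-wise split-and-merge described above could be sketched roughly as below, in the sort-based -merge- syntax of Stata of that era. The file names and the key variable -id- are illustrative assumptions; each piece must carry the same key and be sorted on it.]

set mem 3g
use piece1, clear
forvalues i = 2/7 {
    sort id
    merge id using piece`i'   // piece`i' must also be sorted on id
    drop _merge
}
compress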
Now, I cannot possibly post anything about the data, as they are
confidential. I do notice, though, that SPSS 16.0 manages them at half the
file size and at amazing speed. So far, I have only tried descriptives, but
as things stand, I may not need much more with this dataset. Not willing to
go through the frustration of learning the ropes in another package, I went
64-bit precisely to avoid the quandary I am now in.
As things stand, and in contrast to one of the earlier posts, there is
something magic about the way data are stored and processed in different
packages...
Martin Weiss
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*