I still don't understand what you are trying to do. But I can comment on
your code.
You are looping round 40,000 times and writing a single result to each of
40,000 data files. Then you are looping round again to append all those
40,000 data files into one.
I'd do that directly this way using just one extra file:
clear
set obs 31
gen a = .
tempname out
* one new row per replication will be posted to myresults.dta
postfile `out' t using myresults.dta
qui forval i = 1/40000 {
        * fresh N(0,1) sample of size 31, then a t test against 0
        replace a = invnorm(uniform())
        ttest a = 0
        post `out' (r(t))
}
postclose `out'
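Once the loop finishes, the posted results can be analysed directly. A
minimal sketch, assuming the file name myresults.dta used above:

use myresults.dta, clear
* fraction of simulated t statistics at or below the 95% table value
count if t <= 1.697
di "fraction of t <= 1.697 = " r(N)/_N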
I still doubt 40,000 is anywhere near big enough to get an answer.
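Whether 40,000 is enough depends on how much Monte Carlo error you can
live with: with R independent replications, the standard error of an
estimated tail probability p is roughly sqrt(p*(1-p)/R). A quick check of
that arithmetic (my addition, not part of the original post):

* Monte Carlo SE of an estimated 5% tail probability from 40,000 reps
di sqrt(.05 * .95 / 40000)

which is about 0.001, i.e. roughly 2% relative error on a 5% tail.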
Nick
[email protected]
Victor M. Zammit
* a) The data that I have come from generating random samples of whatever
size, in this case of size 31, from a normally distributed, infinitely
large population; i.e.
local i = 1
while `i' <= 40000 {
        drop _all
        set obs 31
        gen a = invnorm(uniform())
        qui ttest a = 0
        replace a = r(t) in 1
        keep in 1
        save a`i', replace
        local i = `i' + 1
}
* I use 40000 because of a memory constraint. Appending the a`i' files
together gives me a variable of 40000 observations; i.e.
use a1, clear
local i = 2
while `i' <= 40000 {
        append using a`i'.dta
        local i = `i' + 1
}
save ais40000, replace
* b) From ais40000.dta I get the density <= 1.31, presumably to get the
density of 90%, <= 1.697 to get the density of 95%, and so on, according
to the official t table; i.e.
capture program drop density
program define density
        use ais40000, clear
        count if a <= `1'
        di " density <= " "`1'" " = " r(N)/40000
end
density 1.31
density 1.697
density 2.042
density 2.457
density 2.75
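For comparison, the exact lower-tail probabilities for a t distribution
with 30 degrees of freedom are available from Stata's ttail() function.
A cross-check of my own, not part of the original post:

* exact P(T <= x) with 30 df, to set against the simulated fractions
di 1 - ttail(30, 1.31)
di 1 - ttail(30, 1.697)
di 1 - ttail(30, 2.042)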
* For smaller degrees of freedom, the discrepancy is much higher. I would
like to know whether it is at all possible to resolve the memory
constraint.
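One side note on the memory constraint, assuming an older Stata where the
allocation is set by hand: it can be raised with -set memory- before any
data are loaded, although the postfile approach above sidesteps the issue
because only 31 observations are ever in memory at a time.

* raise the allocation (the 100m figure is only illustrative)
clear
set memory 100m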
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/