Date: Sat, 27 Aug 2005 12:15:36 -0500
From: "Jun Xu" <[email protected]>
Subject: st: simulated maximum likelihood estimation--where to construct the random variables
A while ago, Professor Stephen P. Jenkins and Arne Uhlendorff had an
exchange on this. Arne Uhlendorff <[email protected]> wanted to
estimate an MNL model with a random intercept using simulated ML
(rather than using -gllamm-).
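For context (my own generic restatement, not a description of Arne's
exact model): simulated ML replaces the integral over the random
intercept e with an average over R draws e^(1),...,e^(R),

    \[
    P(y_i \mid x_i) = \int P(y_i \mid x_i, e)\, f(e)\, de
    \;\approx\; \frac{1}{R} \sum_{r=1}^{R} P\bigl(y_i \mid x_i, e^{(r)}\bigr),
    \]

with the draws e^(r) held fixed across ML iterations -- which is
exactly the point at issue below.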
**********
I haven't looked at your program in detail, but one thing that strikes
me is that I think you are not treating the pseudo-random uniform draws
correctly. You want 50 replications/draws per ML iteration, but are
creating them anew every iteration -- you shouldn't. They should be
created just once. (See e.g. the gory details of the code underlying
-mvprobit- or associated SJ article.) I don't know if this is having
knock-on effects elsewhere.
good luck
Stephen
***********
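As I read Stephen's advice, the draws should live in the dataset before
-ml model- is ever called, so that every call of the likelihood
evaluator sees exactly the same numbers. A minimal sketch of that setup
(my own illustration, not Stephen's code: the evaluator name -mysimll-
and the variables y, x1, and x2 are hypothetical, and invnorm(uniform())
is just one way to make standard normal draws):

// create the draws once, before -ml- is called
set seed 12345
forval i = 1/100 {
    gen double ed`i' = invnorm(uniform())   // one draw variable per replication
}
// the evaluator then refers to ed1-ed100, which never change across iterations
ml model lf mysimll (y = x1 x2)
ml maximize

Because ed1-ed100 are ordinary variables created outside the evaluator,
nothing regenerates them between iterations.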
I am also trying to write some code for simulated ML as practice. I did
it in two ways: the first method generates the 100 random variables
outside the loop, as below; the second method creates `ed`i'' (the
random variable) anew in each iteration.
Method 1:
// draws are created just once, before the likelihood loop
forval i = 1/100 {
    tempvar ed`i'    // tempvar rather than tempname, so the variables are dropped on exit
    gen double `ed`i'' = ...whatever ways to generate the random variable
}
qui forval i = 1/100 {
    replace `sp' = `sp' * (invlogit(`xb' + `ed`i''))    if $ML_y1 == 1
    replace `sp' = `sp' * (invlogit(-(`xb' + `ed`i''))) if $ML_y1 == 0
}
qui replace `lnf' = ln(`sp')
Method 2:
forval i = 1/100 {
    tempvar ed`i'    // again tempvar rather than tempname
}
qui forval i = 1/100 {
    // a brand-new draw is created here on every call of the evaluator
    gen double `ed`i'' = ...whatever ways to generate the random variable
    replace `sp' = `sp' * (invlogit(`xb' + `ed`i''))    if $ML_y1 == 1
    replace `sp' = `sp' * (invlogit(-(`xb' + `ed`i''))) if $ML_y1 == 0
}
qui replace `lnf' = ln(`sp')
Strangely enough, Method 1 converges and Method 2 never does. I am not
sure whether this is what Professor Jenkins was talking about, or why it
is so. I would appreciate any thoughts on this.
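One variant I have been wondering about (just a sketch, untested: the
fixed seed and the invnorm(uniform()) draws are only illustrative): if
the draws must be generated inside the evaluator, resetting the seed to
a fixed value first should make the regenerated draws numerically
identical on every call, so the evaluator maximizes the same simulated
likelihood at every iteration, as in Method 1.

// variant of Method 2: fix the seed so that, although the draws are
// re-generated on every call of the evaluator, they are identical
// from call to call
set seed 12345
qui forval i = 1/100 {
    tempvar ed`i'
    gen double `ed`i'' = invnorm(uniform())    // illustrative draw
    replace `sp' = `sp' * (invlogit(`xb' + `ed`i''))    if $ML_y1 == 1
    replace `sp' = `sp' * (invlogit(-(`xb' + `ed`i''))) if $ML_y1 == 0
}
qui replace `lnf' = ln(`sp')

The downside is that -set seed- inside the evaluator clobbers the
global random-number state, so this is a diagnostic device rather than
a recommendation.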
There were two points raised by Arne Uhlendorff and me in the earlier
discussion: