Thanks, Adrian, that's a good idea. I just tried it, and every model gave
the same result. Even the most trimmed model, which had only the 4 random
effects in it, produced a vector of coefficients/variances/covariances for
starting values that was not feasible. It was basically 4 coefficients
(race/sex-specific intercepts in a no-intercept model), 4 values of .5 (the
variances), and 6 values of 0 (the covariances).
I then added the other independent variables in a few stages and got the
same result: infeasible starting values.
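
For concreteness, here is roughly what the trimmed run and the starting-value
vector in question look like (an untested sketch: y, d1-d4, r1-r4, and school
are placeholder names, the 4 coefficient values are made up, and I am not
certain of the exact metric and order gllamm's from() expects for the random
part, so the gllamm documentation should be checked):

  eq r1: d1
  eq r2: d2
  eq r3: d3
  eq r4: d4
  matrix b0 = (0.1, -0.2, 0.3, 0.05,   /// 4 race/sex-specific intercepts
               .5, .5, .5, .5,         /// 4 variances
               0, 0, 0, 0, 0, 0)       //  6 covariances
  gllamm y d1-d4, noconstant i(school) nrf(4) eqs(r1 r2 r3 r4) ///
      family(binomial) link(cll) from(b0) copy
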
Still searching for ideas; any suggestions appreciated.
Thanks.
Sam
On Tue, 20 Apr 2004, Adrian Gonzalez-Gonzalez wrote:
> What about estimating a simpler model with only five (instead of 10)
> explanatory variables? Once you get convergence for the simpler model, keep
> adding explanatory variables, so that you will need to "guess" initial
> values only for the extra variable, one at a time.
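>
> To make this concrete, something along these lines might work (an untested
> sketch: y, d1-d4, x1-x6, r1-r4, and school are placeholder names, and I am
> assuming that gllamm's from() takes the parameters in the same order as
> e(b), fixed part first and then the random part):
>
>   eq r1: d1
>   eq r2: d2
>   eq r3: d3
>   eq r4: d4
>   * simpler model with 5 of the 10 explanatory variables
>   gllamm y d1-d4 x1-x5, noconstant i(school) nrf(4) eqs(r1 r2 r3 r4) ///
>       family(binomial) link(cll)
>   matrix a = e(b)
>   * 9 fixed coefficients so far (the 4 dummies plus x1-x5); guess 0 for
>   * the one new coefficient and keep everything else
>   matrix a2 = (a[1, 1..9], 0, a[1, 10...])
>   gllamm y d1-d4 x1-x6, noconstant i(school) nrf(4) eqs(r1 r2 r3 r4) ///
>       family(binomial) link(cll) from(a2) copy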
>
> Adrian
>
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of SamL
> Sent: Tuesday, April 20, 2004 3:14 AM
> To: Stata Listserve
> Cc: [email protected]
> Subject: st: GLLAMM Seeking tricks to get feasible initial values
>
> I am attempting to estimate a complementary log-log model in GLLAMM.
> Despite the following efforts, I have been unable to obtain feasible
> starting values:
>
> 1) I let gllamm come up with the values itself.
>
> 2) I estimated a cloglog model and used its fixed effects as starting
> values, plugging in other numbers for the covariance terms (many different
> permutations, all to no avail; see the sketch after this list for roughly
> what I mean).
>
> 3) I rescaled the interval-level independent variable (which ranges from
> -12 to 5) to range from -1.2 to .5, to make it more comparable to the many
> dummy variables in the model. (There are four random effects that are
> dummies, no intercept, and about 10 other variables in the model.)
>
> 4) I used all zeros for the starting values.
>
> 5) I checked whether any of my second-level contexts had no variation on
> the dependent variable (one of them did), deleted that context, and retried
> all of the above.
>
> 6) I estimated a GLLAMM cloglog model fixing the covariances of the (4)
> random terms at zero and assuming multivariate normality, then tried to plug
> those values into a second GLLAMM run (again, see the sketch after this
> list). There was no success with the first model, so the effort fizzled.
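>
> For what it is worth, 2) and 6) amounted to something like the following
> (an untested sketch: y, d1-d4, x1-x10, r1-r4, and school are placeholder
> names, and I am not certain of the exact metric and order gllamm's from()
> expects for the random part, so the gllamm documentation should be checked):
>
>   * fixed-part starting values from an ordinary cloglog fit
>   cloglog y d1-d4 x1-x10, noconstant
>   matrix bfix = e(b)
>   * append guesses for the 4 variances and 6 covariances
>   matrix b0 = (bfix, J(1, 4, .5), J(1, 6, 0))
>   eq r1: d1
>   eq r2: d2
>   eq r3: d3
>   eq r4: d4
>   gllamm y d1-d4 x1-x10, noconstant i(school) nrf(4) eqs(r1 r2 r3 r4) ///
>       family(binomial) link(cll) from(b0) copy
>   * for 6), if I remember the option name correctly, gllamm's nocorrel
>   * option constrains the correlations between the random effects to zero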
>
> The outcome is measured on about 167,000 cases (though on far fewer records
> than that, as I have collapsed the data and am using frequency weights, so I
> hope run time won't be a problem). Of these, about 3,500 are scored 1, and
> the remainder are scored zero. So an asymmetric link, rather than a logit or
> probit, seems appropriate.
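>
> In case the weighting matters, that setup is roughly this (an untested
> sketch: the variable names are placeholders, and I am going from memory on
> gllamm's weight() convention of looking for wt1, wt2, ... as the frequency
> weights at levels 1, 2, ...):
>
>   * collapse to unique response/covariate patterns, keeping the counts
>   contract y d1-d4 x1-x10 school, freq(wt1)
>   gen wt2 = 1                      // level-2 frequency weights of one
>   eq r1: d1
>   eq r2: d2
>   eq r3: d3
>   eq r4: d4
>   gllamm y d1-d4 x1-x10, noconstant i(school) nrf(4) eqs(r1 r2 r3 r4) ///
>       family(binomial) link(cll) weight(wt)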
>
> But, I cannot get anything to start iterating. It keeps saying:
>
>
> >overflow at level 1 ( 4111 missing values) NOTE: This # changes
> >initial values not feasible
> >(error occurred in ML computation)
> >(use trace option and check correctness of initial model)
> >finish running on 20 Apr 2004 at 00:00:52
>
>
> I have used the trace option, and nothing looks amiss. It just seems that I
> need feasible starting values. Any suggestions for how to obtain them for
> GLLAMM would be greatly appreciated.
>
> Sam
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/