Hi Josh,
Are you using Stata 8 and the latest version of gllamm?
This is at least twice as fast as running the latest
version of gllamm in Stata 7.
You have a large dataset and a large model (four random
effects), so I'm not surprised it's slow. Are the responses
dichotomous? If so, you may be able to collapse the data
considerably; see the gllamm manual for examples.
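If the responses are dichotomous, a minimal sketch of the collapsing idea might look like the following (variable names y, x1, x2, and the cluster identifier id are hypothetical; gllamm's weight() option expects weight variables named wt1, wt2, ... for levels 1, 2, ...):

```stata
* Hypothetical example: collapse level-1 records that share the same
* response and covariate values, keeping the count of duplicates as a
* level-1 frequency weight (wt1).
gen byte one = 1
collapse (sum) wt1 = one, by(id y x1 x2)
gllamm y x1 x2, i(id) link(logit) family(binom) weight(wt)
```

With many duplicated response patterns this can shrink the effective dataset, and hence the run time, substantially.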
I would start with a lower-level model, fewer quadrature
points (the nip() option), fewer covariates, or a smaller dataset,
and then assess how much slower estimation gets as you increase each of these.
The time it takes is roughly proportional to:
* N, the total number of observations
* p^2, where p is the number of parameters
* n^M, where n is the number of quadrature points and M
is the number of random effects
==> the biggest gain comes from reducing M, followed by n, p, and N
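As a quick sanity check of these proportionalities, you can work out an expected speedup in Stata itself. For example, with M = 4 random effects, halving the quadrature points from 8 to 4 should cut the time by roughly a factor of (8/4)^4 (the 8 and 4 here are just illustrative values):

```stata
* Relative cost ~ N * p^2 * n^M; compare n = 8 vs n = 4 with M = 4.
display (8/4)^4    // roughly a 16-fold speedup
```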
If you estimate a simpler model first, you can use these
estimates as starting values for the more complex model
using
matrix a=e(b)
gllamm ..., from(a) ...
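For instance, a hypothetical two-level logit model (variable names assumed) where only the number of quadrature points changes between runs:

```stata
* Fit cheaply first with few quadrature points ...
gllamm y x1 x2, i(id) link(logit) family(binom) nip(4)
* ... then refit with more points, starting from the cheap estimates.
matrix a = e(b)
gllamm y x1 x2, i(id) link(logit) family(binom) nip(8) from(a)
```

Because both runs estimate the same parameters, the saved e(b) can be passed to from() directly.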
I would be interested to hear if you can make this work!
Best wishes,
Sophia
----- Original Message -----
From: <[email protected]>
To: <[email protected]>
Sent: Thursday, May 29, 2003 3:26 PM
Subject: st: gllamm
> Hi
>
> Any ideas on ways to speed up gllamm procedures? I ran one last night and it
> took 6 hours and still was not finished. The manual for gllamm has a formula
> for figuring out how fast it will run, but the new version of Stata is
> supposed to work with gllamm much more quickly. This is a panel dataset with
> approximately 100000 cases. The model is a four level model.
>
> Josh
>
> --
> Joshua D. Hawley
> [email protected]
>
> *
> * For searches and help try:
> * http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/