st: different standard errors with gllamm vs. xtmelogit
From: "de Vries, Robert" <[email protected]>
To: "'[email protected]'" <[email protected]>
Subject: st: different standard errors with gllamm vs. xtmelogit
Date: Tue, 26 Oct 2010 14:11:40 +0100
Hello everyone. I'm having a weird problem with gllamm and xtmelogit when running a fairly simple 2-level random intercepts model.
The model predicts a binary health outcome from one level-2 variable (meangini) and several level-1 variables. It is a sample of 5,4410 people in 16 countries, with 'country' as the level-2 cluster.
The xtmelogit model looks like this:
xi: xtmelogit poorhealth meangini age47 gndr i.education if poorhealth_sample==1 || country1:
(note that the number of integration points is left at the default of 7)
It converges fine on iteration 3 with a log likelihood of -15046.989. The result I am interested in is for meangini, and in this model it is -0.030 (SE = 0.035).
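(As a sanity check I haven't run yet, I could refit with more quadrature points via xtmelogit's intpoints() option, to see whether the estimate and SE are stable, e.g.:

xi: xtmelogit poorhealth meangini age47 gndr i.education if poorhealth_sample==1 || country1: , intpoints(12)

The 12 points here is just an arbitrary larger value for comparison.)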
The gllamm model is identical (as far as I can tell):
xi: gllamm poorhealth meangini age47 gndr i.education if poorhealth_sample==1, i(country1) nip(7) link(logit) f(binomial)
However, the coefficient for the same variable is different (-0.041), and the standard error (0.0043) is over 8 times smaller.
This is obviously extremely important for interpreting the statistical significance of the results, so I'd appreciate any help anyone can offer as to what's going on.
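One difference I suspect, though I'm not certain it explains the gap, is the quadrature method: as I understand it, xtmelogit uses adaptive quadrature by default, whereas gllamm does not unless the adapt option is given. A check I plan to try is refitting the gllamm model with adaptive quadrature and more integration points:

xi: gllamm poorhealth meangini age47 gndr i.education if poorhealth_sample==1, i(country1) nip(12) link(logit) f(binomial) adapt

If the two commands then agree, the original discrepancy was presumably down to the quadrature settings rather than the model specification.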
Cheers
Rob