With Canadian data, I would ask Milorad Kovacevic at Statistics
Canada what the procedure should be. Your data set should have
come with level 2 weights if the data collecting agency considered
them relevant. The multilevel models, as far as I know, are sensitive
to the highest level weights, in terms of getting both the point
estimates and the standard errors right. As for the level 1 weights,
you basically rescale them to get good "within" estimates, such as the
estimate of the level 1 variance; see
http://www.citeulike.org/user/ctacmo/article/711637, which is the
best-known paper on the topic. There have been other approaches,
however, including Kovacevic's paper, which I don't have at hand.
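For the level 1 rescaling, a minimal sketch of one common choice (the
"method 2" scaling, where the rescaled weights sum to the cluster sample
size) might look like this; the variable names w1 (raw level 1 weight)
and pcarea (cluster identifier) are placeholders:

    * rescale level 1 weights to sum to the cluster sample size
    egen double sumw1 = total(w1), by(pcarea)
    egen double nj = count(w1), by(pcarea)
    generate double wt1 = w1 * nj / sumw1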
On 8/18/06, Stefan Kuhle <[email protected]> wrote:
Dear All,
I am working with a large dataset (n=35000) from a nationwide Canadian
survey. I would like to run a 2-level logit model with a random slope using
GLLAMM (nesting individuals in their postal code area).
The frequency weights provided with the survey dataset would obviously be my
level 1 weights. However, I don't quite understand what to use as the level
2 weights in this analysis. GLLAMM's default of setting them to 1 doesn't seem
right to me in this case. Instead, I set them to the number of observations
in each postal code area (egen lev2wgt = count(xyz), by(pcarea)) in the
dataset, but I'm not sure whether this is correct either.
Any suggestions?
Thanks,
Stefan
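To make the mechanics concrete: gllamm's pweight(wt) option expects
variables named wt1 and wt2 to hold the level 1 and level 2 weights. A
sketch, with placeholder names y and x, the wt1 built as above, and a
wt2 that should come from the agency if it supplies area-level weights:

    * gllamm reads wt1 and wt2 when pweight(wt) is specified
    generate double wt2 = 1   // replace with agency-supplied level 2 weights
    generate byte cons = 1
    eq inter: cons
    eq slope: x
    gllamm y x, i(pcarea) family(binomial) link(logit) ///
        nrf(2) eqs(inter slope) pweight(wt) adapt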
--
Stas Kolenikov
http://stas.kolenikov.name
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/