Re: st: xtmelogit: comparing models
From: Stas Kolenikov <[email protected]>
To: [email protected]
Subject: Re: st: xtmelogit: comparing models
Date: Fri, 5 Oct 2012 15:37:50 -0500
I think you would be better off with the Wald test that you've outlined.
The likelihood ratio test has better finite-sample performance, but
only when the model is correctly specified. Information criteria are
kind of goofy here, given that you have different amounts of information
(effective sample sizes) for different parameters in mixed models. The
FAQ you cited is somewhat related: your data are not i.i.d., so
appropriate likelihood ratio testing can get complicated. The Wald test
is more robust, and only requires the vce matrix to be correctly
estimated (which is much easier to achieve).
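
For reference, a minimal sketch of how the LR test and the AIC/BIC
comparison would typically be set up (untested, and assuming the two
models from your post below; -##- expands to the main effects plus the
interaction):

    * model (1): main effects only
    xtmelogit resp i.group i.condition || _all: R.item, covariance(identity) || sbj: , covariance(identity)
    estimates store m1

    * model (2): main effects plus the group-by-condition interaction
    xtmelogit resp i.group##i.condition || _all: R.item, covariance(identity) || sbj: , covariance(identity)
    estimates store m2

    * likelihood ratio test of (1) against (2), and AIC/BIC side by side
    lrtest m1 m2
    estimates stats m1 m2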
You would need to figure out which levels of your factor variables
were actually estimated, which depends on exactly how the factor
variables were broken down against their base levels. Your -test-
command does look about right, provided that these are indeed the
right levels of group and condition.
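
Something along these lines should do it (again a sketch, not tested;
the exact coefficient names depend on which levels end up as the base
categories):

    * after fitting model (2), see which interaction coefficients were
    * actually estimated (base levels carry a b. marker and a zero coefficient)
    matrix list e(b)

    * joint Wald test of all the interaction terms
    testparm i.group#i.condition

    * equivalent, spelling out the non-base cells by hand, as in your post
    test 1.group#2.condition 1.group#3.condition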
--
-- Stas Kolenikov, PhD, PStat (SSC) :: http://stas.kolenikov.name
-- Senior Survey Statistician, Abt SRBI :: work email kolenikovs at
srbi dot com
-- Opinions stated in this email are mine only, and do not reflect the
position of my employer
On Fri, Oct 5, 2012 at 10:50 AM, Luca Campanelli <[email protected]> wrote:
> Dear Stata users,
> I’d like to fit and compare mixed-effects logistic regression models with crossed random effects using -xtmelogit- (Stata/IC 12 for Windows).
>
> For example (“group” has 2 levels[0,1] and “condition” has 3 levels[1,2,3]):
> (1) xtmelogit resp i.group i.condition , || _all: R.item, covariance(id) || sbj: , covariance(id)
> (2) xtmelogit resp i.group i.condition i.group#i.condition , || _all: R.item, covariance(id) || sbj: , covariance(id)
>
> In comparing the two models, I found a big discrepancy between -lrtest- on the one hand and AIC/BIC on the other: -lrtest- was highly significant, indicating that (2) was better than (1), while the AIC and BIC values were clearly smaller for model (1).
> Which should I trust?
>
> Does this apply to my case http://www.stata.com/support/faqs/statistics/likelihood-ratio-test/ ?
> If yes, how can I do the Wald test?
> Would it be:
> test 1.group#2.condition 1.group#3.condition
>
> Is this correct? I have seen others use -testparm- or -lincom-.
> I would appreciate any help in understanding what the appropriate approach is.
>
> thank you,
> Luca
>
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/