I am trying to fit a random-effects model by maximum likelihood (xtreg, mle), but it will not accept frequency weights or analytic weights:

xtreg aht clnt_* loc_* mon_* addl_* c.day_of_service##c.day_of_service##i.hire_order if filename1 == "Overall"
> & filename2 == "Overall" & hire_order ~= . & day_of_service <= 150 [fweight = aht_count], mle
fweight not allowed with maximum-likelihood random-effects models
r(101);

xtreg aht clnt_* loc_* mon_* addl_* c.day_of_service##c.day_of_service##i.hire_order if filename1 == "Overall"
> & filename2 == "Overall" & hire_order ~= . & day_of_service <= 150 [aweight = aht_count], mle
aweight not allowed with maximum-likelihood random-effects models
r(101);

It can apparently accommodate importance weights, but the weight must be constant within my variable identifying each individual employee (empid_client):

xtreg aht clnt_* loc_* mon_* addl_* c.day_of_service##c.day_of_service##i.hire_order if filename1 == "Overall"
> & filename2 == "Overall" & hire_order ~= . & day_of_service <= 150 [iweight = aht_count], mle
weight must be constant within empid_client
r(199);

I tried to get around this by using -expand- to duplicate each observation in the dataset by the number of times indicated by the metric count variable (a rough sketch of that step appears at the end of this message). The problem that poses is that these datasets are already very large, often with more than 500,000 observations at the employee-date level, so duplicating each observation by the value of the count (which averages in the 30s) creates a dataset that is far too big for Stata to handle.

Any suggestions? At this point I'm totally stumped and wondering if there is some other technique I can use that produces the same results as a random-effects regression and can utilize frequency weights or analytic weights.

Thanks in advance!

Best,
Mike

*
*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/faqs/resources/statalist-faq/
*   http://www.ats.ucla.edu/stat/stata/
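For concreteness, the expand step I described looks roughly like this as a do-file fragment (just a sketch; it assumes the data are already xtset on empid_client and that aht_count holds the integer frequency for each employee-date observation):

* duplicate each observation aht_count times, then fit the unweighted model
preserve
expand aht_count
xtreg aht clnt_* loc_* mon_* addl_* ///
    c.day_of_service##c.day_of_service##i.hire_order ///
    if filename1 == "Overall" & filename2 == "Overall" & ///
    hire_order ~= . & day_of_service <= 150, mle
restore

It is the expanded dataset (roughly 500,000 observations times an average count in the 30s) that becomes too large to work with.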