If you need to change the data to get the result you want, you might
as well take the figures from thin air ;). At any rate, despite the
relatively big sample sizes, you may be running into (empirical)
identification problems. Anything that runs longer than, say, 20
iterations would sound suspicious to me. You can think of
underidentification as a lack-of-fit problem: the class of models you
are looking at is not very suitable for your data; the ML optimizer
cannot find any specific configuration of parameters that would make
the model close enough to the data (or at least closer than other
parameter configurations). You can try to get somewhat better starting
values by running something like -xtlogit- to get some initial
estimates.
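A minimal sketch of that starting-values idea (variable names here are
hypothetical: y for the item response, item1-item5 for item dummies,
id for the person identifier; whether -gllamm- is the right final
model is an assumption about your setup):

```stata
* Fit a simpler random-effects logit to get rough initial estimates
* (hypothetical variable names; adjust to your data)
xtlogit y item1-item5, i(id)
matrix b0 = e(b)

* Pass those estimates as starting values to the harder model, e.g.
* something along the lines of (assuming -gllamm- for the 2-PL MLIRT):
* gllamm y item1-item5, i(id) link(logit) family(binom) from(b0)
```

You would likely need to rearrange or rename the columns of b0 to
match the parameter names the second command expects, but even rough
starting values can cut the iteration count considerably.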
On 8/23/06, Prathiba Natesan <[email protected]> wrote:
Thank you for the suggestions Nick.
Could you please tell me how I can change my data in such a case?
Is there a way I can look for anomalies/problems? I am running a 2-PL
MLIRT model on 2 datasets, a simulated dataset and a real dataset
(with a sample size of about 19,000). Both of them seem to give me the
same problems.
Also, on #2, even if I let the program run for about a day or two, the
program produces about 100-200 iterations of the estimates which look
identical. Sometimes the estimates are identical up to the 3rd or 4th
decimal place. Do you think I should still wait longer for it to
converge?
On a similar note, when the iterations fail to converge, what can be
done? What does it say about the data?
I am currently working on my dissertation and these questions are
driving me crazy. I would appreciate any suggestions you might have.
Thanks
Prathiba
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
--
Stas Kolenikov
http://stas.kolenikov.name