Statalist



st: Re: ML estimation and gradient


From   Kit Baum <[email protected]>
To   [email protected]
Subject   st: Re: ML estimation and gradient
Date   Tue, 21 Aug 2007 09:57:32 -0400

In any numerical optimization procedure that uses numerical derivatives, the derivative is a finite-difference approximation to the slope of the function over a small but finite step. If you took an arbitrarily small epsilon around the optimum, and the optimum were expressed to a precision beyond the roughly 15 significant digits of double-precision arithmetic, you would get something closer to zero. In something like -ml-, if you apply a tighter convergence criterion (e.g., requiring that the norm of the gradient be no more than 10^-8), you may not find an optimum at all, or it may take a long time. Thus any optimization routine trades off precision against speed and the likelihood of convergence. Stata's behavior in this regard is similar to that of any other software I have used.
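
To illustrate (a minimal sketch, not from the original exchange; the evaluator below follows the standard lf-method probit example from the -ml- documentation, and foreign, mpg, and weight from the shipped auto dataset are just illustrative choices):

* A forward-difference slope evaluated exactly at the optimum of
* f(x) = -(x-1)^2 is of order h, not zero:
mata:
real scalar f(real scalar x) return(-(x-1)^2)
h = 1e-8
(f(1 + h) - f(1)) / h    // displays about -1e-8, not 0
end

* Fit a probit by -ml- and inspect the gradient at the final iterate:
sysuse auto, clear

program myprobit
    version 9
    args lnf xb
    quietly replace `lnf' = ln(normal(`xb'))  if $ML_y1 == 1
    quietly replace `lnf' = ln(normal(-`xb')) if $ML_y1 == 0
end

ml model lf myprobit (foreign = mpg weight)
ml maximize
ml report

-ml report- displays ln L, its gradient, and its Hessian at the current coefficient vector; the gradient entries will be small but not exactly zero. Tightening the scaled-gradient tolerance, e.g. -ml maximize, nrtolerance(1e-8)-, can slow convergence or prevent it from being declared, as noted above.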


Kit Baum, Boston College Economics and DIW Berlin
http://ideas.repec.org/e/pba1.html
An Introduction to Modern Econometrics Using Stata:
http://www.stata-press.com/books/imeus.html


On Aug 21, 2007, at 2:33 AM, statalist-digest wrote:


I would like to know why, when using maximum likelihood estimation in Stata,
the gradient at the last iteration is often numerically different from
zero, whereas theoretically it should be equal to zero.


