Jake wrote:
>>For pedagogical reasons, I am looking to solve for the maximum value of a normal likelihood. (I realize that this can be solved algebraically, but again, this is merely for pedagogical reasons.) I am using a Newton-Raphson algorithm (i.e., new-estimate = old-estimate - (1st derivative / 2nd derivative)). But the algorithm will not converge. Instead, it simply cycles back and forth between two non-solutions. Is it not possible to solve for the maximum of a normal likelihood using the Newton-Raphson algorithm?>>
To make any useful comment we would need to see the log-likelihood and the data you are working with.
If you are working with the Gaussian and the model is linear in the parameters, NR reproduces the algebraic solution and should converge in one step. This is because NR makes successive quadratic approximations to the function being optimized, and in that case the normal log-likelihood is exactly quadratic, so the first approximation is already exact. If the model is not linear in the parameters, e.g., a Gaussian mixed model, iteration will be needed.
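A minimal sketch of both cases, in Python rather than Stata and not taken from Jake's code (the data and starting values are purely illustrative): with sigma held fixed the log-likelihood is quadratic in mu, so one NR step lands exactly on the sample mean; NR on sigma is not quadratic and takes several iterations.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=100)   # fake data, purely illustrative
n = len(x)

# --- NR for mu with sigma known: quadratic log-likelihood, one step -------
sigma = 2.0
mu = 0.0                                       # deliberately poor start
score   = n * (x.mean() - mu) / sigma**2       # d logL / d mu
hessian = -n / sigma**2                        # d^2 logL / d mu^2 (constant)
mu = mu - score / hessian
print(mu, x.mean())                            # identical: converged in one step

# --- NR for sigma with mu held at the sample mean: needs iteration --------
mu = x.mean()
s = 1.0                                        # starting value for sigma
for it in range(50):
    r2 = np.sum((x - mu) ** 2)
    score   = -n / s + r2 / s**3               # d logL / d sigma
    hessian =  n / s**2 - 3 * r2 / s**4        # d^2 logL / d sigma^2
    step = score / hessian
    s -= step
    if abs(step) < 1e-10:
        break
print(it, s, np.sqrt(r2 / n))                  # several iterations to reach the MLE

A poor starting value (or working on an awkward scale of the parameter) is where the cycling Jake describes typically comes from; on the quadratic part of the problem it cannot happen.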
Aside: There is some evidence to suggest Gauss reverse-engineered the Gaussian to solve the least-squares optimization problem, though de Moivre's use of the same function as an approximation to the binomial was by then almost a century old. Least absolute deviation, i.e., median regression, had been used earlier, but it lacked rigorous backing, involved a lot of heuristic trial and error (there was no linear programming then), and often yields multiple solutions in small datasets. See Steve Stigler's book Statistics on the Table.
JV