Joseph Coveney <[email protected]> wants to add a function of the Hessian
to the likelihood value:
> I would like to penalize the likelihood with a function of the negative
> Hessian. How can I have the evaluator program for use with ml method -lf-
> see the most recent Hessian during each iteration? I know that -ml
> maximize- has it available somewhere, because -ml maximize, hessian- reports
> it during iterations. But a log-likelihood evaluator program (one called
> by -ml model lf-) cannot simply put -matrix `penalty' = some function of
> syminv(e(V))-, because -ml maximize- doesn't post it or update it until the
> end of the run.
>
> I've tried
>
> ml model lf , , ,
> ml maximize, iterate(1)
> local not_yet 1
> while (`not_yet') {
> matrix define B = e(b)
> ml model lf . . .
> ml init B
> ml maximize, search(off) iterate(1) nooutput
> local not_yet = cond(e(converged), 0, `not_yet')
> }
> ml display
>
> but there are at least two iterations (zero and one) for each call to -ml
> maximize , iterate(1)-, each iteration calling the likelihood evaluator
> several times, and e(V) doesn't get updated within the -ml maximize- call
> before the convergence criterion is met.
I believe this will require a change to Joseph's likelihood evaluator in
addition to the following changes to his example above.
tempname penalty nH ll1 ll2 rd
local tol = 1e-6
scalar `penalty' = 0
* the evaluator looks up the penalty through this global, which holds the
* name of the temporary scalar containing the penalty
global MY_penalty `penalty'
ml model lf ...
ml maximize, nooutput noclear
scalar `ll1' = e(ll)
local not_yet 1
while (`not_yet') {
        * negative Hessian from the most recent fit
        matrix `nH' = syminv(e(V))
        scalar `penalty' = ... function of `nH' ...
        * refit with the updated penalty
        ml maximize, search(off) noclear nooutput
        scalar `ll2' = e(ll)
        * stop once successive penalized fits agree
        scalar `rd' = reldif(`ll1', `ll2')
        scalar `ll1' = `ll2'
        local not_yet = `rd' > `tol'
}
ml max, search(off)
Now all we need to do is change the likelihood evaluator to recognize whether
$MY_penalty is defined and add it to the log likelihood calculation.
program mylfeval
        args lnf ...
        ...
        if "$MY_penalty" != "" {
                quietly replace `lnf' = `lnf' + $MY_penalty
        }
end
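For concreteness, a complete 'lf' evaluator along these lines might look
like the following. The normal-regression likelihood, the program name,
the variables, and the equation names are mine and purely illustrative;
Joseph would substitute his own likelihood.

program mynormal_lf
        args lnf mu lnsigma
        * per-observation log likelihood for a normal linear regression
        quietly replace `lnf' = ln(normalden($ML_y1, `mu', exp(`lnsigma')))
        * add the per-observation penalty only when the driver defined it
        if "$MY_penalty" != "" {
                quietly replace `lnf' = `lnf' + $MY_penalty
        }
end

and it would be declared to -ml- with something like

        ml model lf mynormal_lf (mu: y = x1 x2) (lnsigma:)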
If we want the penalty to be added to the overall likelihood only once, then
we would code the value in $MY_penalty to be the overall penalty divided by
the number of observations in the estimation sample, so that summing the
per-observation contributions adds the penalty exactly once.
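For example, if the penalty were something like a Firth-type term, one-half
the log determinant of the negative Hessian (this particular choice is mine,
for illustration only), the penalty computation inside the loop would become

        matrix `nH' = syminv(e(V))
        * overall penalty divided by e(N) so that summing the
        * per-observation contributions adds it exactly once
        scalar `penalty' = 0.5*ln(det(`nH')) / e(N)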
Technical note:
Adding a function of the Hessian as a penalty to the likelihood
calculation would invalidate the type 'lf' evaluator's numerical
assumptions. I believe -ml model lf- would have a hard time computing
the numerical derivatives for something like this.
Notice that the standard errors from the above solution do not take
the penalty into account. Thus the above is only useful to Joseph if
he doesn't care about the penalty's influence on the standard errors.
Otherwise we have a chicken-and-egg problem: the Hessian is a
function of the log likelihood, but we want the log-likelihood
calculation to be a function of the Hessian. If we instead fixed the
penalty to be a function of the unpenalized log likelihood only, we
could get adjusted standard errors, provided we could write down the
functional form of the penalized likelihood.
This would most likely require a type 'd0' evaluator, since the
penalty invalidates the type 'lf' numerical assumptions.
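For what it's worth, a type 'd0' version of such a penalized evaluator
would add the penalty to the overall log likelihood directly. Here is a
sketch, again using the illustrative normal-regression likelihood from
above; note that with 'd0' the global would hold the whole penalty, not
the per-observation average.

program mynormal_d0
        args todo b lnf
        tempvar mu
        tempname lnsigma
        mleval `mu' = `b', eq(1)
        mleval `lnsigma' = `b', eq(2) scalar
        * overall log likelihood, summed over the estimation sample
        mlsum `lnf' = ln(normalden($ML_y1, `mu', exp(`lnsigma')))
        if (`lnf' < .) & ("$MY_penalty" != "") {
                * add the whole penalty once
                scalar `lnf' = `lnf' + $MY_penalty
        }
end

declared with something like

        ml model d0 mynormal_d0 (mu: y = x1 x2) (lnsigma:)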
--Jeff
[email protected]