Re: st: error with optimize(), d1-d2 evaluators and bhhh technique
From: Christophe Kolodziejczyk <[email protected]>
To: "[email protected]" <[email protected]>
Subject: Re: st: error with optimize(), d1-d2 evaluators and bhhh technique
Date: Fri, 6 Dec 2013 16:53:21 +0100
Great! Thanks a lot. Now I can see why it couldn't work. BHHH is based
on the outer product of the gradient, so the Hessian approximation has
to be computed from the observation-level contributions to the gradient
matrix. And, contrary to what I mistakenly believed, the algorithm does
not require the gradient to be computed analytically.
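In Mata terms: if g is the N x (k+1) matrix of observation-level scores
that a gf-type evaluator fills in, the outer-product approximation that
BHHH uses in place of the negative Hessian is simply cross(g,g). A toy
sketch (g here is just a stand-in matrix):

mata:
// toy illustration of the outer product of the gradient (OPG):
// g stands in for an N x K matrix of observation-level scores
g = (1, 2 \ 3, 4 \ 5, 6)
H_opg = cross(g, g)     // g'g = sum over observations of g_i' * g_i
H_opg
end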
Best regards
Christophe
2013/12/5 Jeff Pitblado, StataCorp LP <[email protected]>:
> Christophe Kolodziejczyk <[email protected]> is trying to use
> -technique(bhhh)- with evaluator type d1:
>
>> I've tried to estimate a linear regression model by MLE, using Mata
>> and the optimize() routine. I have also tried to use the BHHH
>> algorithm, but the program failed with the following error message:
>>
>> type d1 evaluators are not allowed with technique bhhh
>> r(111);
>>
>> I do not understand why I got this error message. The documentation
>> mentions that you cannot use BHHH with a d0 evaluator, since the
>> Hessian is based on the gradient of the likelihood function, which
>> has to be computed analytically. Any idea what I am doing wrong or
>> what I am missing?
>>
>> I have included the code below. The program works with technique nr
>> (but, strangely, not with BFGS, which does not find an optimum).
>
> The BHHH technique needs the observation-level contributions to the gradient
> matrix, and thus cannot work with any of the d0, d1, or d2 evaluator types.
> Christophe will need to change the evaluator to conform to a gf0, gf1, or
> gf2 evaluator type. Here is what I did to Christophe's code to make that
> happen:
>
> ***** BEGIN:
> clear all
> set matastrict on
>
> mata:
>
> // simulate data for y = a + b*x + v, with v standard normal
> nobs=100000
> a=.25
> b=0.5
> one=J(nobs,1,1)
>
> rseed(10101)
> x=invnormal(uniform(nobs,1))
> v=invnormal(uniform(nobs,1))
> y=a:+x*b+v
>
> z=one,x        // regressor matrix, constant included
>
> void mylnf(
>     real scalar todo,
>     real rowvector p,
>     real matrix x,
>     real colvector y,
>     real colvector lnf,
>     real matrix g,
>     real matrix H)
> {
>     real scalar nobs
>     real scalar k
>     real vector beta
>     real scalar lsigma
>     real scalar sigma
>     real vector u
>
>     nobs = rows(x)
>     k = cols(x)
>     beta = p[.,(1..k)]'
>     lsigma = p[1,k+1]
>     sigma = exp(lsigma)
>     u = (y-x*beta)/sigma
>
>     // observation-level log-likelihood contributions
>     // (the additive constant -0.5*ln(2*pi) is dropped)
>     lnf = -lsigma :- 0.5*u:^2
>
>     if (todo) {
>         // observation-level scores, one row per observation
>         g = J(nobs,k+1,0)
>         g[.,1..k] = u:*x/sigma       // d lnf / d beta
>         g[.,k+1] = -1 :+ u:*u        // d lnf / d lsigma
>         if (todo > 1) {
>             // analytic Hessian, only requested by a gf2 evaluator
>             H = J(k+1,k+1,0)
>             H[1..k,1..k] = -cross(x,x)/(sigma^2)     // d2 lnf / (d beta d beta')
>             H[k+1,k+1] = -2*cross(u,u)               // d2 lnf / (d lsigma)^2
>             H[k+1,1..k] = -2*cross(u,x)/sigma        // d2 lnf / (d lsigma d beta')
>             _makesymmetric(H)
>         }
>     }
> }
>
> // set up and run the optimization: gf1 evaluator with technique bhhh
> S=optimize_init()
> optimize_init_evaluator(S, &mylnf())
> optimize_init_evaluatortype(S, "gf1")
> optimize_init_params(S, runiform(1,3))    // starting values: (intercept, slope, ln sigma)
> optimize_init_argument(S,1,z)             // passed to mylnf() as x
> optimize_init_argument(S,2,y)             // passed to mylnf() as y
> optimize_init_technique(S,"bhhh")
> p=optimize(S)
> optimize_result_params(S)
> optimize_result_V(S)
> end
> ***** END:
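>
> Note that a gf-type evaluator is not tied to BHHH; the same evaluator
> should also work with the other techniques, for example switching back
> to Newton-Raphson:
>
> optimize_init_technique(S,"nr")
> p=optimize(S)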
>
> --Jeff
> [email protected]
--
Christophe Kolodziejczyk
Research Fellow
AKF, Anvendt KommunalForskning
Danish Institute of Governmental Research
Købmagergade 22
DK-1150 København K
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/