Title: Use of ml for nonlinear models
Authors: Weihua Guan and Gustavo Sanchez, StataCorp
Consider the model
y = f(x) + e
where y is the outcome, f(x) is a nonlinear function of the covariate x, and e is the random error. The nl command estimates the parameters of f(x) by least squares. Here is an example:
. sysuse auto
(1978 automobile data)

. nl (rep78 = {b0}*(1-exp(-{b1}*headroom))), initial(b0 1 b1 0.1) nolog
      Source |       SS           df       MS
-------------+----------------------------------   Number of obs =         69
       Model |   800.67379          2   400.336893   R-squared     =   0.9235
    Residual |   66.326214         67   .989943487   Adj R-squared =   0.9212
-------------+----------------------------------   Root MSE      =    .994959
       Total |         867         69   12.5652174   Res. dev.    =  193.0865

------------------------------------------------------------------------------
       rep78 | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         /b0 |   3.435331   .1604479    21.41   0.000     3.115075    3.755586
         /b1 |   1.947203   1.767421     1.10   0.275    -1.580584    5.474989
------------------------------------------------------------------------------
We can write a program for nl that fits the same model. Doing so is useful when we want to fit the same functional form with different variables, as shown after the output below.
program nlnexpgr, rclass
        version 18
        syntax varlist(min=2 max=2) if
        local lhs : word 1 of `varlist'    // dependent variable
        local rhs : word 2 of `varlist'    // covariate
        // return the substitutable expression, with initial values
        // b0 = 1 and b1 = 0.1
        return local eq "`lhs'={b0=1}*(1-exp(-{b1=0.1}*`rhs'))"
end
This program gives the same results as we obtained above. Note that we type nl nexpgr even though the program is named nlnexpgr; nl prefixes nl to the name given before the colon:
. sysuse auto, clear
(1978 automobile data)

. nl nexpgr : rep78 headroom, nolog
      Source |       SS           df       MS
-------------+----------------------------------   Number of obs =         69
       Model |   800.67379          2   400.336893   R-squared     =   0.9235
    Residual |   66.326214         67   .989943487   Adj R-squared =   0.9212
-------------+----------------------------------   Root MSE      =    .994959
       Total |         867         69   12.5652174   Res. dev.    =  193.0865

------------------------------------------------------------------------------
       rep78 | Coefficient  Std. err.      t    P>|t|     [95% conf. interval]
-------------+----------------------------------------------------------------
         /b0 |   3.435331   .1604479    21.41   0.000     3.115075    3.755586
         /b1 |   1.947203   1.767421     1.10   0.275    -1.580584    5.474989
------------------------------------------------------------------------------
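Because nlnexpgr takes its variable names from varlist, the same program fits this functional form with other variables. For example (a hypothetical call for illustration; output not shown), we could use trunk as the covariate:

. nl nexpgr : rep78 trunk, nolog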
Now let’s consider how we would fit the same model with ml, which uses maximum likelihood. We can rearrange the equation as
e = y − f(x)
If we assume that the errors are distributed as N(0, σ^2), the density function is

    f(e) = 1/(σ*sqrt(2*π)) * exp(-e^2/(2*σ^2))

Then the log-likelihood function is

    lnL = -0.5*ln(2*π) - ln(σ) - 0.5*e^2/σ^2
        = -0.5*ln(2*π) - ln(σ) - 0.5*(y - f(x))^2/σ^2
In our example, we have
f(x) = b0*(1-exp(-b1*x))
Given this formula, we can treat b0 as a free parameter and b1*x as the linear combination xb, which has no constant term. We also need to estimate one more parameter, sigma, the standard deviation of the normal error distribution.
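Substituting f(x) into the log likelihood gives the expression that our evaluator must compute for each observation:

    lnL = -0.5*ln(2*π) - ln(σ) - 0.5*(y - b0*(1 - exp(-b1*x)))^2/σ^2

This is exactly what the program below computes, with xb = b1*x passed in as the first equation.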
program mlnexpgr
        version 18
        args lnf b1x b0 sigma
        // b1x is the linear combination b1*x; b0 and sigma are scalars
        tempvar res
        // residual: y - f(x) = y - b0*(1 - exp(-b1*x))
        quietly gen double `res' = $ML_y1 - `b0'*(1-exp(-`b1x'))
        // observation-level log likelihood under N(0, sigma^2) errors
        quietly replace `lnf' = -0.5*ln(2*_pi)-ln(`sigma')-0.5*`res'^2/`sigma'^2
end
. ml model lf mlnexpgr (b1: rep78 = headroom, nocons) (b0:) (sigma:)

. ml max
(output omitted)
                                                        Number of obs =     69
                                                        Wald chi2(1)  =   2.85
Log likelihood = -96.543274                             Prob > chi2   = 0.0912
------------------------------------------------------------------------------
       rep78 | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
b1           |
    headroom |   1.947203   1.152936     1.69   0.091    -.3125097    4.206915
-------------+----------------------------------------------------------------
b0           |
       _cons |   3.435331   .1375282    24.98   0.000      3.16578    3.704881
-------------+----------------------------------------------------------------
sigma        |
       _cons |   .9804333     .08346    11.75   0.000     .8168547    1.144012
------------------------------------------------------------------------------
The estimates we obtained from ml match those from the nl program. The estimated standard errors differ slightly because the two commands compute them differently. In particular, maximum likelihood estimates sigma without a degrees-of-freedom correction: the ml estimate of sigma, .9804333, equals the nl root MSE, .994959, multiplied by sqrt(67/69).
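We can confirm this relationship directly (the values are taken from the output above):

. display .994959*sqrt(67/69)

which returns approximately .98043, the ml estimate of sigma.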
Because we used the lf method, we can also obtain robust variance estimates with the vce(robust) option or cluster-robust estimates with the vce(cluster clustvar) option.
. ml model lf mlnexpgr (b1: rep78 = headroom, nocons) (b0:) (sigma:), vce(robust)

. ml max
(output omitted)
                                                        Number of obs =     69
                                                        Wald chi2(1)  =   3.98
Log pseudolikelihood = -96.543274                       Prob > chi2   = 0.0462
------------------------------------------------------------------------------
             |               Robust
       rep78 | Coefficient  std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
b1           |
    headroom |   1.947203   .9765424     1.99   0.046     .0332149    3.861191
-------------+----------------------------------------------------------------
b0           |
       _cons |   3.435331   .1145494    29.99   0.000     3.210818    3.659843
-------------+----------------------------------------------------------------
sigma        |
       _cons |   .9804333   .0743649    13.18   0.000     .8346808    1.126186
------------------------------------------------------------------------------
The robust estimation gives the same coefficient estimates but adjusts the standard errors of all three parameters in this model. We may also apply this program to clustered data:
. ml model lf mlnexpgr (b1: rep78 = headroom, nocons) (b0:) (sigma:), vce(cluster turn)
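After declaring the model, ml max produces the estimates with standard errors clustered on turn. When adapting this approach to other models, a cautious workflow (a sketch using standard ml utility commands, not part of the original example) is to verify the evaluator and improve the starting values before maximizing:

. ml check
. ml search
. ml max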