Thanks very much David for your reply which is extremely helpful.
best
wiji
ps: Wiji is a she!
=================================
Professor Wiji Arulampalam,
Department of Economics,
University of Warwick,
Coventry,
CV4 7AL,
UK.
Tel: +44 (24) 7652 3471
Sec. Tel: +44 (24) 7652 3202
Fax: +44 (24) 7652 3032
email: [email protected]
http://www.warwick.ac.uk/Economics/arulampalam/
RES2003: http://www.warwick.ac.uk/res2003/
>>> [email protected] 09/19/02 12:57AM >>>
In the ongoing discussion about -xtprobit-, Wiji Arulampalam
<[email protected]> has raised several interesting questions.
Let's begin with his comment that
> I am dealing with models which have many variables. The models are
> estimated using very large datasets. This is the first time I had
> used xtprobit with lagged dependent variable (LDV) as a regressor.
> When I didn't get any std. errors for rho or ln(sig2) but got proper
> std. errors for the other coefficient estimates and when these
> coefficient estimates were the same as the ordinary probit, I
> automatically assumed that there was no unobserved heterogeneity!
> Note, this problem only occurs when I have an LDV. But my
> conclusions were obviously wrong - this I found out when I checked
> it on Limdep. In the past I have always used Limdep for RE probit
> and I have had no problems at all. The only reason I used xtprobit
> was because of the size of the data set and model and I thought that
> it would be easier given that my data was stored as a stata file.
While there is some intuition here, the conclusion that there is no
individual-level heterogeneity does not follow. The reason is that you cannot
compare two sets of parameter estimates when one of them does not come from
a converged optimization procedure.
Missing standard errors indicate that the estimated variance-covariance (VCE)
matrix is not of full rank. After an ML procedure without constraints, an
estimated VCE of less than full rank indicates that the optimization has not
converged. So those unconverged -xtprobit- estimates with missing standard
errors cannot be compared to the -probit- estimates.
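As a quick check, and this is only a sketch of one way to look, you can
save and list the estimated VCE after -xtprobit- and inspect the row and
column for any parameter whose standard error is reported as missing:

    matrix V = e(V)
    matrix list V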
> It seems a bit too much to ask us to try different starting values
> to check for sensitivity of the results when one is estimating a
> very large model using a lot of data. How is one supposed to decide
> whether the fact that the program gave the same answer as the
> ordinary probit was because of unstable optimisation or whether it
> was because of no unobserved heterogeneity? How would I
> discriminate between these two?
There are two points here. One is that the procedure Weihau and I
recommended yesterday is going to be quite time consuming with large
datasets. This is true. However, the problem that Wiji is having with
missing standard errors is due to the fact that the current combination of
optimization method and approximation to the integral used in
-xtprobit- is not sufficiently robust. While the method of using a
grid of starting values will be slow, it will probably overcome this
lack of robustness for most datasets.
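For concreteness, here is a rough sketch of one way to set up such a grid
search in a do-file. This is only an illustration of the idea, not the
exact code from yesterday's post, and the grid values for rho are
arbitrary:

    probit y x1 x2
    matrix b = e(b)
    scalar best_ll = .
    scalar best_r  = .
    foreach r in .05 .1 .2 .3 .5 {
        * rho = r translates into a /lnsig2u starting value of ln(r/(1-r))
        matrix b0 = b, (ln(`r'/(1-`r')))
        capture xtprobit y x1 x2, i(id) from(b0, copy)
        * keep the converged fit with the highest log likelihood
        if _rc == 0 & (best_ll >= . | e(ll) > best_ll) {
            scalar best_ll = e(ll)
            scalar best_r  = `r'
        }
    }
    display "best starting rho: " best_r "   log likelihood: " best_ll

One would then refit from the winning starting vector and make sure that
none of the reported standard errors is missing before interpreting
anything.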
Second, Wiji has asked how one can distinguish between unconverged results
and no unobserved heterogeneity. In fact, he refined this question into two
sub-questions in a subsequent post.
> 1. If stata does not move from the rho=0.0 case and produces missing
> std errors for rho and lnsig2, how do I know whether this is an
> unstable numerical problem or that rho is genuinely zero?
> 2. How come stata gives proper std errors for the other coefficient
> estimates when it is also giving missing std errors for rho and
> lnsig2?
Allow me to provide a short and a long version of my answer.
Short version:
If there are missing standard errors for any of the estimated coefficients
from -xtprobit- and no constraints have been applied, treat the results as
if they were from a non-converged maximization procedure. Try specifying
different starting values. I continue to recommend the procedure that
Weihau and I described yesterday.
Second, if there are no missing standard errors and the results pass
-quadchk-, then use a likelihood ratio test for inference with respect to
the null hypothesis that there is no unobserved individual level
heterogeneity. When no starting values are specified this test is reported
at the bottom of the output. When starting values are specified, it must be
calculated by hand. Below in my long version I give an example of how to
perform this calculation.
Long version:
In this version, let's partition Wiji's questions into four parts.
i) How can I distinguish converged from unconverged results?
ii) If I obtain unconverged results with the default starting values,
how can I try to get converged results?
iii) If I have converged results from -xtprobit-, how can I tell if
they are stable?
iv) If I have converged, stable results, how can I test the null
hypothesis of no unobserved heterogeneity?
Let's begin with question i), and let's use simulated data. In the next
several lines of output, I generate data from a simple cross-sectional
-probit- model with no unobserved heterogeneity, although I do create an
-id- variable so that the panel routine can be used.
. set obs 500
obs was 0, now 500
. gen id = _n
. expand 10
(4500 observations created)
. set seed 123456
. gen double x1 = invchi2(2,uniform()) - 2
. gen double x2 = invchi2(2,uniform()) - 2
. gen double e = invnorm(uniform())
. gen double ystar = 3*(1 + x1 + x2)+ e
. gen y = (ystar>0)
Now, let's use -probit- to estimate the coefficients of this model.
. probit y x1 x2
Iteration 0: log likelihood = -3439.6806
Iteration 1: log likelihood = -1820.8642
Iteration 2: log likelihood = -1136.0018
Iteration 3: log likelihood = -768.83952
Iteration 4: log likelihood = -588.73511
Iteration 5: log likelihood = -511.98949
Iteration 6: log likelihood = -491.23031
Iteration 7: log likelihood = -489.35275
Iteration 8: log likelihood = -489.33515
Probit estimates Number of obs = 5000
LR chi2(2) = 5900.69
Prob > chi2 = 0.0000
Log likelihood = -489.33515 Pseudo R2 = 0.8577
------------------------------------------------------------------------------
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 3.040576 .1270333 23.94 0.000 2.791595 3.289556
x2 | 2.991092 .1270602 23.54 0.000 2.742059 3.240125
_cons | 2.970903 .1338831 22.19 0.000 2.708496 3.233309
------------------------------------------------------------------------------
note: 469 failures and 1394 successes completely determined.
As one would expect, the -probit- estimates are very close to their true
values of 3.
Since we will need it below, let's save off the value of the log-likelihood.
. scalar ll0 = e(ll)
We will also find these estimated coefficients useful, so let's save them
off as well.
. mat b =e(b)
Now, let's try to estimate the parameters of an -xtprobit- model on these
data.
. xtprobit y x1 x2 , i(id)
Fitting comparison model:
Iteration 0: log likelihood = -3439.6806
Iteration 1: log likelihood = -1820.8642
Iteration 2: log likelihood = -1136.0018
Iteration 3: log likelihood = -768.83952
Iteration 4: log likelihood = -588.73511
Iteration 5: log likelihood = -511.98949
Iteration 6: log likelihood = -491.23031
Iteration 7: log likelihood = -489.35275
Iteration 8: log likelihood = -489.33515
Fitting full model:
rho = 0.0 log likelihood = -489.33515
rho = 0.1 log likelihood = -490.43994
Iteration 0: log likelihood = -489.33514
Random-effects probit Number of obs = 5000
Group variable (i) : id Number of groups = 500
Random effects u_i ~ Gaussian Obs per group: min = 10
avg = 10.0
max = 10
Wald chi2(2) = 589.48
Log likelihood = -489.33514 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 3.040577 .1270448 23.93 0.000 2.791574 3.28958
x2 | 2.991093 .1270721 23.54 0.000 2.742036 3.24015
_cons | 2.970904 .1338946 22.19 0.000 2.708475 3.233332
-------------+----------------------------------------------------------------
/lnsig2u | -14 . . .
-------------+----------------------------------------------------------------
sigma_u | .0009119 . . .
rho | 8.32e-07 . . .
------------------------------------------------------------------------------
Likelihood ratio test of rho=0: chibar2(01) = 0.00 Prob >= chibar2 = 1.000
Using the default starting values, we obtain results with missing standard
errors. These are not converged results, so I make no attempt to interpret
them.
At this point, we move on to the second question:
ii) If I obtain unconverged results with the default starting values,
how can I try to get converged results?
The answer is to try different starting values. I recommend using the
-probit- estimates for the coefficients and a grid of points for rho.
In my search for converged results, I am going to specify some different
starting values and then use -xtprobit- in another attempt to obtain
converged results.
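One detail worth making explicit: -xtprobit- normalizes the variance of the
idiosyncratic error to one, so rho = sig2u/(1 + sig2u) and hence
/lnsig2u = ln(sig2u) = ln(rho/(1 - rho)). That is why a starting value of
rho = 0.1 enters the starting vector below as ln(.1/(1-.1)). (You can check
the relationship against the converged output below: exp(-3.420701) = .0327
= sig2u, and .0327/1.0327 = .0317 = rho.)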
. mat b0 = b, (ln(.1/(1-.1)))
. xtprobit y x1 x2 , i(id) from(b0, copy)
Iteration 0: log likelihood = -490.43993
Iteration 1: log likelihood = -489.40071
Iteration 2: log likelihood = -489.25836
Iteration 3: log likelihood = -489.22553
Iteration 4: log likelihood = -489.22079
Iteration 5: log likelihood = -489.22057
Iteration 6: log likelihood = -489.22057
Random-effects probit Number of obs = 5000
Group variable (i) : id Number of groups = 500
Random effects u_i ~ Gaussian Obs per group: min = 10
avg = 10.0
max = 10
Wald chi2(2) = 363.32
Log likelihood = -489.22057 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 3.088922 .16568 18.64 0.000 2.764195 3.413649
x2 | 3.035387 .1604369 18.92 0.000 2.720936 3.349837
_cons | 3.015358 .1663938 18.12 0.000 2.689232 3.341484
-------------+----------------------------------------------------------------
/lnsig2u | -3.420701 2.189146 -7.711349 .8699469
-------------+----------------------------------------------------------------
sigma_u | .1808024 .1979015 .0211593 1.544922
rho | .0316547 .0671033 .0004475 .7047346
------------------------------------------------------------------------------
Since there are no missing values for the standard errors, I have converged
results. Unfortunately, specifying starting values prevented the likelihood
ratio test from being automatically computed. Since I will have to do it by
hand, I save off the value of the log-likelihood.
. scalar ll1 = e(ll)
Before I proceed with interpreting the results, I need to check that the
converged results I have found do not vary unacceptably with the number of
quadrature points used in approximating the integral. In other words, I need
to answer question iii). The answer is that I use -quadchk-. -quadchk-
will refit the model for me with 4 fewer and 4 more quadrature points than
I currently have. (You can change these numbers; see [R] quadchk.) If the
estimates do not vary by much, then they are stable.
. quadchk, noout
Refitting model quad() = 8
Refitting model quad() = 16
Quadrature check
Fitted Comparison Comparison
quadrature quadrature quadrature
12 points 8 points 16 points
-----------------------------------------------------
Log -489.22057 -489.22057 -489.22057
likelihood 3.968e-11 1.717e-11 Difference
-8.110e-14 -3.509e-14 Relative difference
-----------------------------------------------------
y: 3.0889218 3.0889218 3.0889218
x1 2.911e-11 -8.053e-12 Difference
9.424e-12 -2.607e-12 Relative difference
-----------------------------------------------------
y: 3.0353869 3.0353869 3.0353869
x2 2.711e-11 -7.379e-12 Difference
8.931e-12 -2.431e-12 Relative difference
-----------------------------------------------------
y: 3.0153583 3.0153583 3.0153583
_cons 2.665e-11 -7.402e-12 Difference
8.838e-12 -2.455e-12 Relative difference
-----------------------------------------------------
lnsig2u: -3.420701 -3.420701 -3.420701
_cons 5.401e-10 -2.439e-10 Difference
-1.579e-10 7.132e-11 Relative difference
-----------------------------------------------------
Since the results are nearly invariant to the number of quadrature points, I
conclude that they are stable and I can confidently proceed with
interpretation.
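If the relative differences had been large, the usual remedy would be to
refit the model with more quadrature points and run -quadchk- again; see
[R] quadchk. A sketch, assuming the -quad()- option of -xtprobit-:

    xtprobit y x1 x2, i(id) quad(24) from(b0, copy)
    quadchk, noout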
Now, let's look into question iv). The test of the null that there is no
heterogeneity is a test of the null hypothesis that sigma_u = 0. This is a
test on the boundary of the parameter space for sigma_u. (See help j_chibar
for more on this topic and a reference.)
The computation of the test statistic remains the same, but its asymptotic
distribution is different from that of a standard likelihood ratio test.
Instead of converging to a chi-squared with one degree of freedom, it
converges to a 50:50 mixture of a point mass at zero and a chi-squared with
one degree of freedom, so the p-value is half the usual chi-squared(1) tail
probability.
Let's compute the value of the likelihood ratio test and its p-value.
. scalar lr = 2*(ll1-ll0)
. scalar p = .5*chi2tail(1,lr)
And display the results:
. di "hand lr is " lr " with p-value " p
hand lr is .22917373 with p-value .31606859
This result indicates that we fail to reject the null hypothesis of no
heterogeneity or, equivalently, that sigma_u = 0.
I hope that this helps.
David
[email protected]
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*