Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
Re: st: gradient in -ml- producing type mismatch?
From: Misha Spisok <[email protected]>
To: [email protected]
Subject: Re: st: gradient in -ml- producing type mismatch?
Date: Sat, 8 May 2010 16:43:32 -0700
Thank you, Professor Buis. Now I have more questions--hopefully a bit
better than the previous one.
Summary of questions:
1. What does `lnf' do when it precedes -mlvecsum-?
2. What else can go in that position?
3. In the syntax of -mlvecsum-, what does `scalarname_lnf' do?
4. In general, what, precisely, does -mlvecsum- implicitly do? (My
suspicion is that it multiplies the row vector `x' of "independent
variables" by `exp'.)
5. How can I build up my gradient function?
Believe it or not, I had read the [R] manual on this, as well as a few
presentations by Prof. Baum. One source of confusion (among others,
to be sure) is that I read somewhere (else?) that -mlvecsum-
implicitly multiplies the row vector `x' (of the "independent"
variables) by what follows the equals sign; I thought this might have
been contingent on the `lnf' that seems to precede every instance of
-mlvecsum- that I can find. So a first question is, what does `lnf'
do when it precedes -mlvecsum-?
Re-reading the manual section after your comment helps (I think)
clarify some things, but I still have questions.
Ultimately, I want a gradient that looks like this (please pardon the
misuse and abuse of syntax):
$ML_y1*`x' - ($ML_y1/`sumvij')*sum(by group)`x'*exp(`theta')
where `x' is a row vector, which seems to be handled by -mlvecsum-.
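Since I can sanity-check the algebra outside Stata, here is a small
NumPy sketch of what I mean (all names, like sumxexb, are my own
placeholders mirroring the locals below, not anything Stata produces);
it checks this gradient against a finite difference of the grouped
log-likelihood, so at least the formula itself should be right:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_choices, k = 2, 10, 3          # mirrors the 2x10x3 example below
x = rng.normal(size=(n_groups, n_choices, k))
y = rng.integers(0, 3, size=(n_groups, n_choices)).astype(float)
b = rng.normal(size=k)

def loglik(b):
    theta = x @ b                                   # theta_{i,j} = x_{i,j}.b
    sumvij = np.exp(theta).sum(axis=1, keepdims=True)
    return (y * (theta - np.log(sumvij))).sum()

def gradient(b):
    theta = x @ b
    e = np.exp(theta)
    sumvij = e.sum(axis=1, keepdims=True)           # sum_k exp(theta_{i,k})
    sumxexb = (e[..., None] * x).sum(axis=1)        # by-group sum of x*exp(theta)
    partone = (y[..., None] * x).sum(axis=(0, 1))   # sum of y*x over all obs
    parttwo = ((y / sumvij)[..., None] *
               sumxexb[:, None, :]).sum(axis=(0, 1))
    return partone - parttwo

# central finite-difference check of the analytic gradient
eps = 1e-6
fd = np.array([(loglik(b + eps * np.eye(k)[i]) - loglik(b - eps * np.eye(k)[i]))
               / (2 * eps) for i in range(k)])
assert np.allclose(gradient(b), fd, atol=1e-5)
```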
The [R] manual states that the syntax for -mlvecsum- is
mlvecsum scalarname_lnf rowvecname = exp [if] [, eq(#)]
What, precisely, does "scalarname_lnf" mean? I've only seen `lnf' in
this position.
"rowvecname" seems straightforward; it's the name of the row vector to
be generated. "exp" is a little less clear. My understanding is that
whatever is in "exp" is (1) assumed to be a scalar and (2) multiplies
the row vector of independent variables (`x') by that scalar. The
syntax in the manual isn't clear to me.
It seems that the first term, $ML_y1*`x', could be written as,
mlvecsum ? `partone' = $ML_y1.
I put "?" because I've only seen `lnf' follow -mlvecsum-, but I don't
want the row vector multiplied by `lnf'.
The second part is trickier (for me). Misusing and abusing syntax,
I'd like to do something like,
by group: sum(mlvecsum ($ML_y1/`sumvij') `parttwo' = exp(`theta'))
Or, thinking about another way to misuse and abuse syntax,
mlvecsum ? `xexb' = exp(`theta')
by group: egen double `sumxexb' = sum(`xexb')
mlvecsum ? `parttwo' = ($ML_y1/`sumvij')*`sumxexb' /* but I don't want
it multiplied by `x'... */
matrix `g' = `partone' - `parttwo'
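Thinking about it more, one rearrangement might rescue the -mlvecsum-
form after all: because `sumvij' and the by-group sum of x*exp(theta)
are constant within group, the second term, sum_i (sum_j
y_ij/sumvij_i) * sum_k x_ik*exp(theta_ik), can be rewritten as an
ordinary observation-level sum, sum_obs w_obs * x_obs, with w_obs =
exp(theta_obs) * sumy_group / sumvij_group, where sumy_group is the
by-group sum of $ML_y1. A quick NumPy check of that algebra (names are
my own, not Stata's):

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_choices, k = 2, 10, 3
x = rng.normal(size=(n_groups, n_choices, k))
y = rng.integers(0, 3, size=(n_groups, n_choices)).astype(float)
b = rng.normal(size=k)

theta = x @ b
e = np.exp(theta)
sumvij = e.sum(axis=1, keepdims=True)       # constant within group
sumy = y.sum(axis=1, keepdims=True)         # by-group sum of y

# direct form: sum_i (sum_j y_ij/sumvij_i) * sum_k x_ik*exp(theta_ik)
sumxexb = (e[..., None] * x).sum(axis=1)
direct = ((y / sumvij).sum(axis=1)[:, None] * sumxexb).sum(axis=0)

# mlvecsum-style form: sum over observations of w_obs * x_obs, where
# w_obs = exp(theta)*sumy/sumvij is an ordinary observation-level scalar
w = e * sumy / sumvij
mlvecsum_style = (w[..., None] * x).sum(axis=(0, 1))

assert np.allclose(direct, mlvecsum_style)
```

If that algebra is right, then in Stata something along the lines of
-egen double `sumy' = sum($ML_y1), by(group)- followed by -mlvecsum
`lnf' `parttwo' = exp(`theta')*`sumy'/`sumvij'- might build `parttwo'
without any explicit by-group step inside -mlvecsum-, but I have not
verified the Stata side of this.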
I started to think that it's impossible to calculate my gradient due
to a violation of the necessary functional form, but I don't think
so because the log-likelihood function is, in fact, the sum of each
observation's log-likelihood function. I imagine (hope?) that a
little insight and experience might confirm or deny this and, further
(if possible), suggest some approaches to solving my problem. I found
some lecture notes from MIT that mentioned this very model as a
"difficult" exercise, which suggests that it's "doable."
Below is an (admittedly ugly, notationally cumbersome, and somewhat
tedious) illustration of what I'm trying to do (focusing on
`parttwo'). (This may explain why I'm trying to work with -mata- in
my other post, as I can do this in Matlab, but would like to implement
it in Stata to take advantage of some of Stata's features; plus, in
spite of my frustration at this point, it's kinda fun.)
Thanks again.
Misha
Suppose this is my data:
Data group choice y x1 x2 x3
1 1 y_{1,1} x1_{1,1} x2_{1,1} x3_{1,1}
1 2 y_{1,2} x1_{1,2} x2_{1,2} x3_{1,2}
1 3 y_{1,3} x1_{1,3} x2_{1,3} x3_{1,3}
1 4 y_{1,4} x1_{1,4} x2_{1,4} x3_{1,4}
1 5 y_{1,5} x1_{1,5} x2_{1,5} x3_{1,5}
1 6 y_{1,6} x1_{1,6} x2_{1,6} x3_{1,6}
1 7 y_{1,7} x1_{1,7} x2_{1,7} x3_{1,7}
1 8 y_{1,8} x1_{1,8} x2_{1,8} x3_{1,8}
1 9 y_{1,9} x1_{1,9} x2_{1,9} x3_{1,9}
1 10 y_{1,10} x1_{1,10} x2_{1,10} x3_{1,10}
2 1 y_{2,1} x1_{2,1} x2_{2,1} x3_{2,1}
2 2 y_{2,2} x1_{2,2} x2_{2,2} x3_{2,2}
2 3 y_{2,3} x1_{2,3} x2_{2,3} x3_{2,3}
2 4 y_{2,4} x1_{2,4} x2_{2,4} x3_{2,4}
2 5 y_{2,5} x1_{2,5} x2_{2,5} x3_{2,5}
2 6 y_{2,6} x1_{2,6} x2_{2,6} x3_{2,6}
2 7 y_{2,7} x1_{2,7} x2_{2,7} x3_{2,7}
2 8 y_{2,8} x1_{2,8} x2_{2,8} x3_{2,8}
2 9 y_{2,9} x1_{2,9} x2_{2,9} x3_{2,9}
2 10 y_{2,10} x1_{2,10} x2_{2,10} x3_{2,10}
If I understand correctly, by definition, `theta' is as follows:
group choice theta
1 1 x1_{1,1}b1 + x2_{1,1}b2 + x3_{1,1}b3
1 2 x1_{1,2}b1 + x2_{1,2}b2 + x3_{1,2}b3
1 3 x1_{1,3}b1 + x2_{1,3}b2 + x3_{1,3}b3
1 4 x1_{1,4}b1 + x2_{1,4}b2 + x3_{1,4}b3
1 5 x1_{1,5}b1 + x2_{1,5}b2 + x3_{1,5}b3
1 6 x1_{1,6}b1 + x2_{1,6}b2 + x3_{1,6}b3
1 7 x1_{1,7}b1 + x2_{1,7}b2 + x3_{1,7}b3
1 8 x1_{1,8}b1 + x2_{1,8}b2 + x3_{1,8}b3
1 9 x1_{1,9}b1 + x2_{1,9}b2 + x3_{1,9}b3
1 10 x1_{1,10}b1 + x2_{1,10}b2 + x3_{1,10}b3
2 1 x1_{2,1}b1 + x2_{2,1}b2 + x3_{2,1}b3
2 2 x1_{2,2}b1 + x2_{2,2}b2 + x3_{2,2}b3
2 3 x1_{2,3}b1 + x2_{2,3}b2 + x3_{2,3}b3
2 4 x1_{2,4}b1 + x2_{2,4}b2 + x3_{2,4}b3
2 5 x1_{2,5}b1 + x2_{2,5}b2 + x3_{2,5}b3
2 6 x1_{2,6}b1 + x2_{2,6}b2 + x3_{2,6}b3
2 7 x1_{2,7}b1 + x2_{2,7}b2 + x3_{2,7}b3
2 8 x1_{2,8}b1 + x2_{2,8}b2 + x3_{2,8}b3
2 9 x1_{2,9}b1 + x2_{2,9}b2 + x3_{2,9}b3
2 10 x1_{2,10}b1 + x2_{2,10}b2 + x3_{2,10}b3
`sumvij' is
group choice sumvij: note that sumvij_{i,j} = sumvij_{i,k} for all j and k
1 1 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 2 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 3 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 4 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 5 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 6 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 7 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 8 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 9 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
1 10 exp(theta_{1,1}) + exp(theta_{1,2}) + … + exp(theta_{1,10})
2 1 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 2 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 3 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 4 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 5 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 6 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 7 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 8 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 9 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
2 10 exp(theta_{2,1}) + exp(theta_{2,2}) + … + exp(theta_{2,10})
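The theta and sumvij tables above are easy to mirror numerically; as a
sanity check that `sumvij' really is constant within group (NumPy
again, names my own):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(2, 10, 3))        # 2 groups, 10 choices, 3 regressors
b = rng.normal(size=3)

theta = x @ b                          # theta_{i,j} = x1*b1 + x2*b2 + x3*b3
sumvij = np.exp(theta).sum(axis=1, keepdims=True)   # sum_j exp(theta_{i,j})
sumvij_full = np.broadcast_to(sumvij, theta.shape)  # replicated within group,
                                                    # exactly as tabulated above
assert np.allclose(sumvij_full[0], sumvij_full[0, 0])
assert np.allclose(sumvij_full[1], sumvij_full[1, 0])
```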
As an intermediary step, I want this row vector, which I'll call
`sumxexb' = (`sumxexb1', `sumxexb2', `sumxexb3'), noting that
sumxexb_{i,j} = sumxexb_{i,k} for all j and k.
In tabular form, sumxexb1, sumxexb2, and sumxexb3 correspond to x1,
x2, and x3, respectively, and each sumxexbN_{i,j} = sumxexbN_{i,k} for
all j and k. Since every row within a group is therefore identical, I
write one row per group rather than ten identical ones:
group 1 (all choices j = 1, ..., 10):
sumxexb1 = x1_{1,1}*exp(theta_{1,1}) + x1_{1,2}*exp(theta_{1,2}) + … + x1_{1,10}*exp(theta_{1,10})
sumxexb2 = x2_{1,1}*exp(theta_{1,1}) + x2_{1,2}*exp(theta_{1,2}) + … + x2_{1,10}*exp(theta_{1,10})
sumxexb3 = x3_{1,1}*exp(theta_{1,1}) + x3_{1,2}*exp(theta_{1,2}) + … + x3_{1,10}*exp(theta_{1,10})
group 2 (all choices j = 1, ..., 10):
sumxexb1 = x1_{2,1}*exp(theta_{2,1}) + x1_{2,2}*exp(theta_{2,2}) + … + x1_{2,10}*exp(theta_{2,10})
sumxexb2 = x2_{2,1}*exp(theta_{2,1}) + x2_{2,2}*exp(theta_{2,2}) + … + x2_{2,10}*exp(theta_{2,10})
sumxexb3 = x3_{2,1}*exp(theta_{2,1}) + x3_{2,2}*exp(theta_{2,2}) + … + x3_{2,10}*exp(theta_{2,10})
The final step for the second part is
group choice
1 1 (y_{1,1}/sumvij_{1,1})*sumxexb_{1,1}
1 2 (y_{1,2}/sumvij_{1,2})*sumxexb_{1,2}
1 3 (y_{1,3}/sumvij_{1,3})*sumxexb_{1,3}
1 4 (y_{1,4}/sumvij_{1,4})*sumxexb_{1,4}
1 5 (y_{1,5}/sumvij_{1,5})*sumxexb_{1,5}
1 6 (y_{1,6}/sumvij_{1,6})*sumxexb_{1,6}
1 7 (y_{1,7}/sumvij_{1,7})*sumxexb_{1,7}
1 8 (y_{1,8}/sumvij_{1,8})*sumxexb_{1,8}
1 9 (y_{1,9}/sumvij_{1,9})*sumxexb_{1,9}
1 10 (y_{1,10}/sumvij_{1,10})*sumxexb_{1,10}
2 1 (y_{2,1}/sumvij_{2,1})*sumxexb_{2,1}
2 2 (y_{2,2}/sumvij_{2,2})*sumxexb_{2,2}
2 3 (y_{2,3}/sumvij_{2,3})*sumxexb_{2,3}
2 4 (y_{2,4}/sumvij_{2,4})*sumxexb_{2,4}
2 5 (y_{2,5}/sumvij_{2,5})*sumxexb_{2,5}
2 6 (y_{2,6}/sumvij_{2,6})*sumxexb_{2,6}
2 7 (y_{2,7}/sumvij_{2,7})*sumxexb_{2,7}
2 8 (y_{2,8}/sumvij_{2,8})*sumxexb_{2,8}
2 9 (y_{2,9}/sumvij_{2,9})*sumxexb_{2,9}
2 10 (y_{2,10}/sumvij_{2,10})*sumxexb_{2,10}
Or, less succinctly,
1 1 (y_{1,1}/sumvij_{1,1})*sumxexb1_{1,1} (y_{1,1}/sumvij_{1,1})*sumxexb2_{1,1} (y_{1,1}/sumvij_{1,1})*sumxexb3_{1,1}
1 2 (y_{1,2}/sumvij_{1,2})*sumxexb1_{1,2} (y_{1,2}/sumvij_{1,2})*sumxexb2_{1,2} (y_{1,2}/sumvij_{1,2})*sumxexb3_{1,2}
1 3 (y_{1,3}/sumvij_{1,3})*sumxexb1_{1,3} (y_{1,3}/sumvij_{1,3})*sumxexb2_{1,3} (y_{1,3}/sumvij_{1,3})*sumxexb3_{1,3}
1 4 (y_{1,4}/sumvij_{1,4})*sumxexb1_{1,4} (y_{1,4}/sumvij_{1,4})*sumxexb2_{1,4} (y_{1,4}/sumvij_{1,4})*sumxexb3_{1,4}
1 5 (y_{1,5}/sumvij_{1,5})*sumxexb1_{1,5} (y_{1,5}/sumvij_{1,5})*sumxexb2_{1,5} (y_{1,5}/sumvij_{1,5})*sumxexb3_{1,5}
1 6 (y_{1,6}/sumvij_{1,6})*sumxexb1_{1,6} (y_{1,6}/sumvij_{1,6})*sumxexb2_{1,6} (y_{1,6}/sumvij_{1,6})*sumxexb3_{1,6}
1 7 (y_{1,7}/sumvij_{1,7})*sumxexb1_{1,7} (y_{1,7}/sumvij_{1,7})*sumxexb2_{1,7} (y_{1,7}/sumvij_{1,7})*sumxexb3_{1,7}
1 8 (y_{1,8}/sumvij_{1,8})*sumxexb1_{1,8} (y_{1,8}/sumvij_{1,8})*sumxexb2_{1,8} (y_{1,8}/sumvij_{1,8})*sumxexb3_{1,8}
1 9 (y_{1,9}/sumvij_{1,9})*sumxexb1_{1,9} (y_{1,9}/sumvij_{1,9})*sumxexb2_{1,9} (y_{1,9}/sumvij_{1,9})*sumxexb3_{1,9}
1 10 (y_{1,10}/sumvij_{1,10})*sumxexb1_{1,10} (y_{1,10}/sumvij_{1,10})*sumxexb2_{1,10} (y_{1,10}/sumvij_{1,10})*sumxexb3_{1,10}
2 1 (y_{2,1}/sumvij_{2,1})*sumxexb1_{2,1} (y_{2,1}/sumvij_{2,1})*sumxexb2_{2,1} (y_{2,1}/sumvij_{2,1})*sumxexb3_{2,1}
2 2 (y_{2,2}/sumvij_{2,2})*sumxexb1_{2,2} (y_{2,2}/sumvij_{2,2})*sumxexb2_{2,2} (y_{2,2}/sumvij_{2,2})*sumxexb3_{2,2}
2 3 (y_{2,3}/sumvij_{2,3})*sumxexb1_{2,3} (y_{2,3}/sumvij_{2,3})*sumxexb2_{2,3} (y_{2,3}/sumvij_{2,3})*sumxexb3_{2,3}
2 4 (y_{2,4}/sumvij_{2,4})*sumxexb1_{2,4} (y_{2,4}/sumvij_{2,4})*sumxexb2_{2,4} (y_{2,4}/sumvij_{2,4})*sumxexb3_{2,4}
2 5 (y_{2,5}/sumvij_{2,5})*sumxexb1_{2,5} (y_{2,5}/sumvij_{2,5})*sumxexb2_{2,5} (y_{2,5}/sumvij_{2,5})*sumxexb3_{2,5}
2 6 (y_{2,6}/sumvij_{2,6})*sumxexb1_{2,6} (y_{2,6}/sumvij_{2,6})*sumxexb2_{2,6} (y_{2,6}/sumvij_{2,6})*sumxexb3_{2,6}
2 7 (y_{2,7}/sumvij_{2,7})*sumxexb1_{2,7} (y_{2,7}/sumvij_{2,7})*sumxexb2_{2,7} (y_{2,7}/sumvij_{2,7})*sumxexb3_{2,7}
2 8 (y_{2,8}/sumvij_{2,8})*sumxexb1_{2,8} (y_{2,8}/sumvij_{2,8})*sumxexb2_{2,8} (y_{2,8}/sumvij_{2,8})*sumxexb3_{2,8}
2 9 (y_{2,9}/sumvij_{2,9})*sumxexb1_{2,9} (y_{2,9}/sumvij_{2,9})*sumxexb2_{2,9} (y_{2,9}/sumvij_{2,9})*sumxexb3_{2,9}
2 10 (y_{2,10}/sumvij_{2,10})*sumxexb1_{2,10} (y_{2,10}/sumvij_{2,10})*sumxexb2_{2,10} (y_{2,10}/sumvij_{2,10})*sumxexb3_{2,10}
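Putting the pieces together, the per-observation rows tabulated above
can be formed by broadcasting and then summed over all observations to
give the second term of the gradient; a sketch (NumPy, names my own):

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_choices, k = 2, 10, 3
x = rng.normal(size=(n_groups, n_choices, k))
y = rng.integers(0, 3, size=(n_groups, n_choices)).astype(float)
b = rng.normal(size=k)

theta = x @ b
e = np.exp(theta)
sumvij = e.sum(axis=1, keepdims=True)
sumxexb = (e[..., None] * x).sum(axis=1, keepdims=True)  # constant in j

# the per-observation rows tabulated above: (y_ij/sumvij_i) * sumxexb_i
rows = (y / sumvij)[..., None] * sumxexb                 # (groups, choices, k)
parttwo = rows.sum(axis=(0, 1))

partone = (y[..., None] * x).sum(axis=(0, 1))
g = partone - parttwo                                    # the desired gradient
assert g.shape == (k,)
```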
On Sat, May 8, 2010 at 5:01 AM, Maarten buis <[email protected]> wrote:
> --- On Sat, 8/5/10, Misha Spisok wrote:
>> I'm getting the following error (-ml- program and example
>> that reproduces error is included below
> <snip>
>> gen double `obsg' = `partone' - $ML_y1*`myfrac'
>> matrix `g' = `obsg'
>
> `obsg' is a variable and `g' is a matrix, and the two
> don't go together.
>
> For a solution, see -help mlvecsum-.
>
> Hope this helps,
> Maarten
>
> --------------------------
> Maarten L. Buis
> Institut fuer Soziologie
> Universitaet Tuebingen
> Wilhelmstrasse 36
> 72074 Tuebingen
> Germany
>
> http://www.maartenbuis.nl
> --------------------------
>
>
>
>
> *
> * For searches and help try:
> * http://www.stata.com/help.cgi?search
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
>