One doesn't have to do much programming before coming across a
situation where it is necessary to parse a list into individual tokens
for subsequent use, usually in some form of looping structure.
However, Stata's built-in tokenize requires additional manipulation
of the tokens, especially if a non-space parsing character is used.
On 4 October 2006 I posted a question regarding tokenize
( http://www.stata.com/statalist/archive/2006-10/msg00178.html ) and
received several excellent responses including a Mata routine and
a pointer to Nick Cox's tknz command. The latter was closer to my
purpose but still included any non-space parsing character as one of
the tokens. I took tknz and made some modifications (with Nick's
permission) which are now available on SSC, specifically:
(1) causing its no-options default to behave exactly as tokenize does
(2) allowing an option to drop non-space parsing characters to get a
"clean" list
(3) returning the number of parsed tokens in s(items)
Examples:
tknz "fe,fi,fo,fum" , s(v) p(,) nochar
or
local choices "fe,fi,fo,fum"
tknz `"`choices'"' , s(v) p(,) nochar
After tknz, one can directly process the list with:
forval i = 1/`s(items)' {
    di as res "Token `i' is `v`i''"
}
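
For contrast, here is a minimal sketch of the same task using the
built-in tokenize. Because tokenize returns the parsing character
itself as a token, the loop has to skip over the commas by hand --
exactly the manipulation that tknz's nochar option avoids:

```stata
* Built-in tokenize keeps the parsing character as tokens:
local choices "fe,fi,fo,fum"
tokenize `"`choices'"' , parse(",")
* The positional macros now hold: `1'="fe", `2'=",", `3'="fi", ...
* so a "clean" pass over the list must test for and skip the commas:
local i = 1
while "``i''" != "" {
    if "``i''" != "," {
        di as res "Token is ``i''"
    }
    local ++i
}
```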
The modified routine can be accessed with:
-net describe tknz, from(http://fmwww.bc.edu/RePEc/bocode/t)-
I hope this proves useful to some of you.
DCE
--
David Elliott
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/