Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org.
Re: st: too long of string in mata?
From
Pedro Nakashima <[email protected]>
To
[email protected]
Subject
Re: st: too long of string in mata?
Date
Wed, 26 Oct 2011 15:38:35 -0200
Bad news.
Nick, is there any way to confirm it?
Pedro.
2011/10/8 Nick Cox <[email protected]>:
> I get that result too with the -tokens()- function (not a command).
> Seems that you are being bitten by some limit in Mata, perhaps even a
> bug.
>
> Nick
>
> On Sat, Oct 8, 2011 at 6:52 PM, Ozimek, Adam <[email protected]> wrote:
>> Stata users,
>>
>> I'm trying to read a dataset with a very long text field (a continuation of a problem I've asked multiple questions about on Statalist; thanks for all your help and patience so far). I'm finding that Mata splits the text into two fields when the text is over 503 characters long. Here is an example using an arbitrary text with markers at each 100-character point:
>>
>>
>> . type long.txt
>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx100xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx200xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx300xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> xxxxxxxx400xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx500xxxxxxxxxxxxxxxxxx
>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx600
>>
>> . file open testname using long.txt, read text
>>
>> . file read testname test
>>
>> . mata
>> ------------------------------------------------- mata (type end to exit) ---------------------------------------------------------
>>
>> : fields = tokens(st_local("test"),char(9))
>>
>> : fields
>> [1, 1] = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx100xxxxxxxxxxxxxxxxxxxx
>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx200xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx300xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>>> xxxxxxxxxxxxxxxxxxx400xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx500xxx
>> [1, 2] = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx600
>>
>>
>> Is there some way to tell the tokens command to not split the text at 503 characters?
>
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
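[Editor's note] The cause of the split was never confirmed in this thread. A minimal, untested sketch of how one might probe it, together with a possible workaround: build a long string inside Mata itself to see whether -tokens()- alone splits it, and read the file line by line with Mata's own fopen()/fget() instead of routing the text through -file read- and st_local(). The filename long.txt and the tab delimiter are taken from the example above; whether bypassing the local macro avoids the 503-character split is an assumption, not something established in the thread.

. mata
: // Probe: build a 600-character string directly in Mata and tokenize it.
: // If it stays in one piece, the split likely occurs on the way through
: // the local macro rather than inside tokens() itself.
: s = 600 * "x"
: cols(tokens(s, char(9)))
:
: // Workaround sketch: read long.txt inside Mata, never passing the
: // line through -file read- and st_local().
: fh = fopen("long.txt", "r")
: while ((line = fget(fh)) != J(0, 0, "")) {
>         fields = tokens(line, char(9))
>         strlen(line)
> }
: fclose(fh)
: end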