You should set the seed to an identical value before running each of these
commands (see -help seed-):
. set seed 123
. first_command
. set seed 123
. second_command
. set seed 123
. third_command
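For example, a sketch using shortened versions of the commands in your
post (x1 x2 x3 stand in for your full variable list, reps(20) as in your
post, and the seed value 123 is arbitrary):

. * x1 x2 x3 here stand in for the full x1-x22 list; seed 123 is arbitrary
. set seed 123
. bs "qreg y x1 x2 x3" "_b[x1] _b[x2] _b[x3] _b[_cons]", reps(20)
. set seed 123
. sqreg y x1 x2 x3, reps(20)
. set seed 123
. bsqreg y x1 x2 x3, reps(20)

With the seed reset to the same value and the same number of replications
each time, any difference in the standard errors that remains is less
likely to be just bootstrap sampling noise.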
--jean
> Dear Nick, thanks for your response.
>
> I don't know if I should conclude that it is because of the randomness
> that we expect. For example, for one of the variables I am getting a
> standard error of .1238498 in method 2 but .0903295 in method 3, which
> shifts the p-value from 0.101 to 0.025. Is that strange? If we are to
> base a decision about the significance of a coefficient on the p-value,
> which p-value (among the three we get) should we accept?
>
> Thanks,
> Rijo John.
>
> On Thu, 24 Jun 2004, Nick Cox wrote:
>
> :You mean differing other than because of
> :randomness?
> :
> :Nick
> :[email protected]
> :
> :Rijo John
> :
> :> I was doing a bootstrapping exercise for the same quantile
> :> regression in
> :> three ways.
> :> 1) bs "qreg y x1 x2 x3....x22" "_b[x1] _b[x2]...._b[x22] _b[_cons]",
> :>    reps(20)
> :> 2) sqreg y x1 x2 ...... x22
> :> 3) bsqreg y x1 x2 ......x22
> :>
> :> Why am I getting different bootstrap standard errors with each of
> :> these methods? The Stata help says these methods are the same. The
> :> coefficients I am getting are the same, though. Can anyone tell me
> :> which of these is to be accepted or most widely used?
>
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/