Importance weights do whatever you want them to do. For example, set
your "weight" variable to more than 1, say 10, so that the iweight
1/weight is less than 1. Then, in the auto dataset, -reg price mpg
[iw=1/10]- will report only 7 observations. (I had a typo in my
previous message, and the text got mangled.) Thus, importance weights
can do the opposite of what you claim.
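To see the mechanics, here is a rough numpy sketch (not Stata code)
of one plausible way an importance-weighted regression behaves,
consistent with the 7-observation result above: the coefficients come
from weighted least squares, and the reported N is the (truncated)
sum of the weights rather than the row count. The fake "auto-like"
data below (74 rows of price and mpg) is my own illustration, not the
real auto dataset.

```python
import numpy as np

# Hypothetical stand-in for the auto dataset: 74 observations,
# price regressed on mpg (variable names follow the post).
rng = np.random.default_rng(0)
n = 74
mpg = rng.uniform(12, 41, n)
price = 10000 - 200 * mpg + rng.normal(0, 500, n)

def iweight_reg(y, x, w):
    """Weighted least squares via the normal equations.
    With importance weights, the reported sample size is taken
    here as the truncated sum of the weights -- an assumption
    that matches 74 * (1/10) = 7.4 -> 7 in the post."""
    X = np.column_stack([np.ones_like(y), x])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    reported_n = int(np.sum(w))
    return beta, reported_n

beta_w, n_w = iweight_reg(price, mpg, np.full(n, 1 / 10))  # [iw=1/10]
beta_u, n_u = iweight_reg(price, mpg, np.ones(n))          # unweighted
```

Because a constant weight just rescales the normal equations, beta_w
equals beta_u, while the reported N drops from 74 to 7.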
See,
. help weights
I recommend that you stay away from importance weights in most cases,
but they are very handy in the rare cases when you, as a programmer,
need them. For example, in my confidence ellipse program (. findit
ellip) I implemented Bartels' "fractional pooling of disparate
observations" in the pool(#) option, using fractional importance
weights to downweight "problematic" observations, e.g. problematic in
a Bayesian sense.
Why do you want to use importance weights?
Anders Alexandersson
"B. Burcin Yurtoglu" <burcin.yurtoglu@u...> wrote:
> Dear All,
>
> The use of "importance" weights seems to increase the number of
observations in
> the regress command.
>
> E.g.
>
> reg y x
>
> produces an output say with 8 observations
>
> whereas
>
> reg y x [iw=1/weight]
> produces an output with 17 observations
>
> Does anybody have an explanation or suggestions for reading?
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/