st: RE: RE: RE: Significance stars
For what it's worth, I personally often use significance starring of
P-values according to its original rationale, i.e. as indicating a
footnote at the bottom of the table. In that footnote, I state that
these P-values are in the discovery set of a multiple-test procedure,
such as the Simes procedure controlling the false discovery rate at
0.05, or the Holm procedure controlling the family-wise error rate at
0.05. That rationale arguably makes sense if a lot of P-values are being
reported, and we might expect 5 percent of them to be nominally
significant (P<=0.05), even if all null hypotheses are true.
Roger
Roger Newson
Lecturer in Medical Statistics
Respiratory Epidemiology and Public Health Group
National Heart and Lung Institute
Imperial College London
Royal Brompton campus
Room 33, Emmanuel Kaye Building
1B Manresa Road
London SW3 6LR
UNITED KINGDOM
Tel: +44 (0)20 7352 8121 ext 3381
Fax: +44 (0)20 7351 8322
Email: [email protected]
www.imperial.ac.uk/nhli/r.newson/
Opinions expressed are those of the author, not of the institution.
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Nick Cox
Sent: 18 March 2007 16:46
To: [email protected]
Subject: st: RE: RE: Significance stars
Thanks for your testimony. Naturally, real life
is complicated and short. I too sometimes use
P < 0.05 as an indicator of what's worth taking
seriously, although always in combination with
other criteria. And I too sometimes compromise
reluctantly with reviewers for the sake of getting
a paper published.
All I can say on the last is that the Stata Journal
disapproves very mightily!
What I find interesting is the apparent lack of
_any_ good reason for starring. The social facts that
many people do it and that a few people even insist on
it are not in question. It's the rationale I seek.
Nick
[email protected]
Anderson, Bradley J
> Interesting history regarding the use of * and ** and I
> strongly agree with your comments. Unfortunately, many
> editors and reviewers regard a certain level of Type I error
> (usually < .05) as a sacred criterion that defines what's
> important, and what's not important. And what gets published
> and what does not get published. Indeed, I've had editors
> who have required us to remove p-values and confidence
> intervals in favor of * and **.
*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
*