I'd assert, perhaps very rashly, that beyond
some threshold, very low P-values are
practically indistinguishable. I suppose that a
log P-value of -20 is often appealing as a kind of
thermonuclear demolition of a null hypothesis, but I wonder
if anyone would think differently of (say) -6. Also,
as is well known, the further you go out into
the tail the more you depend on everything being
as it should be (model assumptions, data without
measurement error, numerical analysis...).
On the other hand, there are situations
in which an overwhelming P-value is needed
for any ensuing decision.
A good discussion of this issue is given in Subsection 35.7 of Kirkwood and
Sterne (2003), which is a basic text aimed mostly at non-mathematicians.
This uses a Bayesian heuristic, based on the well-known result that the
posterior odds between 2 hypotheses after the data analysis are equal to the
prior odds between the same 2 hypotheses multiplied by the likelihood ratio
between the 2 hypotheses. It is argued that a P-value below 0.003 is good
enough for most of the people most of the time, because, *if* the prior
odds are as bad as 100:1 against a nonzero population difference, *and* the
power to detect a difference significant at P<=0.001 is as low as 0.5,
*then* the posterior odds in favour of a nonzero population difference,
given a P-value <=0.001, will be 5:1 in favour.
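That arithmetic is easy to check directly. Here is a minimal sketch in Python, using only the numbers quoted above; the likelihood-ratio step treats power divided by the significance level as the Bayes factor, which is the simplification the heuristic rests on:

```python
# Bayesian heuristic as quoted from Kirkwood and Sterne (2003):
# posterior odds = prior odds * likelihood ratio.

alpha = 0.001          # significance threshold: P <= 0.001
power = 0.5            # assumed power to detect a difference at that threshold
prior_odds = 1 / 100   # prior odds of a nonzero population difference (100:1 against)

# Heuristic likelihood ratio:
# P(P <= alpha | nonzero difference) / P(P <= alpha | zero difference)
likelihood_ratio = power / alpha   # = 500

posterior_odds = prior_odds * likelihood_ratio
print(round(posterior_odds, 6))    # 5.0, i.e. 5:1 in favour of a nonzero difference
```

Even under deliberately pessimistic prior odds and low power, the posterior odds come out at 5:1 in favour, which is the basis for the claim that such a P-value is good enough most of the time.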