One place where tiny p-values are important is in multiple-comparison tests,
where you're applying some sort of Bonferroni-like correction or venturing
into false discovery rates and the like. If you stack enough tests on top
of one another in a given analysis, it's likely you'll meet that p-value
cutoff of significance purely by chance...
This is true if "that p-value cutoff of significance" is derived from
authority-based prior odds, as in the Kirkwood-Sterne heuristic that I
quoted. It is not true if "that p-value cutoff of significance" is a
corrected cutoff, arising either from the Bonferroni correction or from a
false discovery rate procedure. However, John is right to point out that
arbitrarily tiny p-values can be important in such circumstances. Some
documents on multiple-test procedures, false discovery rates and their
implementation in Stata can be downloaded from my website (see my
signature), using either a browser or the Stata -net- command.
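
To make the contrast concrete, here is a rough sketch (in Python rather
than Stata, and with invented p-values purely for illustration) of the two
kinds of corrected cutoff: the Bonferroni cutoff alpha/m, and the
Benjamini-Hochberg step-up rule used to control the false discovery rate.
With m = 10,000 tests, the Bonferroni cutoff is 5e-6, so only genuinely
tiny p-values survive, which is why arbitrarily tiny p-values matter in
such settings.

    def bonferroni_rejections(pvalues, alpha=0.05):
        # Reject any hypothesis whose p-value is at or below the
        # Bonferroni-corrected cutoff alpha/m.
        m = len(pvalues)
        return [p <= alpha / m for p in pvalues]

    def benjamini_hochberg_rejections(pvalues, alpha=0.05):
        # Benjamini-Hochberg step-up: find the largest rank i such that
        # the i-th smallest p-value is at or below i*alpha/m, then reject
        # the hypotheses with the i smallest p-values.
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        k = 0
        for rank, idx in enumerate(order, start=1):
            if pvalues[idx] <= rank * alpha / m:
                k = rank
        reject = [False] * m
        for idx in order[:k]:
            reject[idx] = True
        return reject

    # Invented example: 4 small p-values among 10,000 tests.
    pvals = [1e-7, 3e-6, 8e-6, 1.5e-5] + [0.5] * 9996
    print(sum(bonferroni_rejections(pvals)))          # 2 (cutoff is 5e-6)
    print(sum(benjamini_hochberg_rejections(pvals)))  # 4 (step-up admits more)

The point of the example is that both corrected cutoffs sit far below the
conventional 0.05, so a p-value of, say, 0.001 would not survive either
correction here.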