The probability that the data we have measured, or a result at least as extreme, could have arisen by chance given the null hypothesis. Note that because the p-value is related to the probability of making an error, big p-values are bad (weak evidence) and small p-values are good (strong evidence against the null hypothesis).
The null hypothesis is often some form of 'no effect', such as a new system giving no improvement over the old one. P-values are often quoted at specific levels such as 5%, 1% or 0.1% (p<0.05, p<0.01, or p<0.001 respectively), but this is largely because they were difficult to calculate in the past: one had to look them up in a book of statistical tables where they had been (laboriously) precalculated. Recent advice from the Royal Statistical Society is to quote the exact p-value as a measure of evidence, rather than rely on magic values.
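For example, the sketch below (a minimal illustration assuming SciPy is available; the timing data and system names are invented) runs a two-sample t-test and reports the exact p-value rather than a thresholded one:

```python
# A minimal sketch using SciPy to report an exact p-value.
# The data are invented: task-completion times (seconds) with an
# old system vs. a hypothetical new one.
from scipy import stats

old_system = [12.1, 11.8, 13.0, 12.6, 12.9, 11.5, 12.2, 13.1]
new_system = [11.2, 10.9, 11.8, 11.5, 12.0, 10.7, 11.3, 11.9]

# Null hypothesis: no difference in mean completion time.
t_stat, p_value = stats.ttest_ind(old_system, new_system)

# Quote the exact value, e.g. "p = 0.0012", not just "p < 0.05".
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```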
Note also that even if there is no effect, a p-value of 5% or less will occur about one time in twenty, and one of 1% or less about one time in a hundred; that is, one may occasionally get a false positive result, or Type I error. Demanding a smaller p-value before declaring significance makes such a false positive less likely.
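To see this in action, the following sketch (assuming NumPy and SciPy; the sample sizes and seed are illustrative) simulates many experiments in which the null hypothesis is true and counts how often p falls below 0.05:

```python
# A minimal sketch simulating experiments where the null hypothesis
# is true, to show that p < 0.05 still occurs about one time in
# twenty: these are false positives (Type I errors).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Two samples drawn from the same distribution: no real effect.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

# Expect roughly 5% (about 500 of 10,000) despite there being no effect.
print(f"False positive rate: {false_positives / n_experiments:.3f}")
```

Despite there being no real effect, roughly 5% of the runs come out 'significant' at the 5% level; this is precisely the Type I error rate.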
Also known as p-level
Links:
- Wikipedia: p-value
- GraphPad: What is a P value?