Where a researcher manipulates their results in some way to obtain a p-value deemed significant, and hence publishable. This may be deliberate, or simply the result of poor practice or poor training. One example of p-hacking is to run many different experiments and data analyses and report only those that come out significant, so that all of the non-significant experiments go unreported; the resulting bias is known as the file drawer effect. The impact of p-hacking is that some published results may be due to pure chance rather than a real effect; this increases the number of false positives in the literature, reducing the validity of the entire discipline.
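The multiple-testing mechanism behind this definition can be sketched with a small simulation. This is an illustrative example only (the coin-flip setup and the number of experiments are invented for demonstration): every simulated experiment tests a perfectly fair coin, so any "significant" result is a false positive, yet running enough experiments almost guarantees that some will cross the p < 0.05 threshold by chance.

```python
import math
import random

def p_value_two_sided(heads, n):
    """Two-sided p-value for 'is this coin fair?' using the normal
    approximation to the binomial distribution."""
    z = (heads - n / 2) / math.sqrt(n / 4)
    # Standard normal tail probability via the error function.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n_experiments = 200  # hypothetical number of experiments "tried"
n_flips = 100
alpha = 0.05

# The null hypothesis is TRUE in every experiment (the coin is fair),
# so every p < alpha result below is a false positive.
p_values = []
for _ in range(n_experiments):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    p_values.append(p_value_two_sided(heads, n_flips))

significant = [p for p in p_values if p < alpha]
print(f"{len(significant)} of {n_experiments} null experiments "
      f"came out 'significant' at alpha = {alpha}")
```

Reporting only the handful of significant runs, while the rest stay in the file drawer, is exactly the selective-reporting practice described above.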

Defined on page 82

Used on pages 82, 87, 98, 99, 100