Sensitivity analysis assesses how much small changes in parameters or data values affect the final result. If a small change in an uncertain or random value produces a large change in the result, this reduces our confidence in that result. We may perform sensitivity analysis by making perturbations (small changes) in values, either calculating the impact mathematically or simply re-running the analysis with the perturbed values; if a small perturbation makes a big change, we know we need to be careful. This is sometimes also called perturbation analysis.
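The re-running approach can be sketched in a few lines. This is only an illustrative sketch: the data, the 1% perturbation size, and the choice of the sample mean as the "result" are all assumptions made for the example, not part of any particular analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=50)  # hypothetical data

def analysis(values):
    # The "result" of our analysis: here, simply the sample mean.
    return float(np.mean(values))

baseline = analysis(data)

# Perturb each observation by a small (~1%) random relative amount
# and re-run the same analysis on the perturbed values.
epsilon = 0.01
perturbed = data * (1 + epsilon * rng.standard_normal(len(data)))
change = abs(analysis(perturbed) - baseline)

print(f"baseline result: {baseline:.4f}")
print(f"change after 1% perturbation: {change:.4f}")
```

If `change` is small relative to the baseline, the result is insensitive to perturbations of this size; a large `change` would be a warning sign.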

One example arises in calculating different measures of variation. We know (mathematically) that outliers have a large impact on the standard deviation (s.d., σ): it is very sensitive. In contrast, the interquartile range is unaffected by a single outlier: it is robust. Another example is found in Bayesian statistics, where the choice of prior distribution affects the posterior. One could therefore, in principle, re-work the analysis with slightly different priors, although in practice this is rarely done.
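The contrast between the two measures of variation is easy to demonstrate numerically. The small dataset below is invented purely for illustration; we corrupt its largest value into an extreme outlier and compare the effect on the standard deviation and on the interquartile range.

```python
import numpy as np

data = np.array([2.0, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0])
corrupted = data.copy()
corrupted[-1] = 100.0  # replace the maximum with an extreme outlier

def iqr(x):
    # Interquartile range: difference between the 75th and 25th percentiles.
    q75, q25 = np.percentile(x, [75, 25])
    return q75 - q25

sd_clean = np.std(data, ddof=1)
sd_out = np.std(corrupted, ddof=1)

print(f"s.d.: {sd_clean:.2f} -> {sd_out:.2f}")   # explodes
print(f"IQR:  {iqr(data):.2f} -> {iqr(corrupted):.2f}")  # unchanged
```

The standard deviation jumps by more than an order of magnitude, while the interquartile range does not move at all: the sensitive measure reacts to the single perturbed value, the robust one ignores it.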

Context: Similarly, while Bayesian statistics demands a precise prior probability distribution, in practice uniform or other very 'spread' priors are often used, reflecting a high degree of uncertainty. Ideally, one would try a number of priors to obtain a form of sensitivity analysis, rather as we did in the example in Chapter 7, Section 7.3.1, but I have not seen this done in practice, possibly because it would add another level of interpretation to explain!
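A minimal sketch of this kind of prior sensitivity check, using a conjugate Beta–Binomial model so the posterior update is a one-line formula. The data (7 successes in 20 trials) and the three candidate priors are assumptions chosen for the example, not taken from the chapter.

```python
# Prior sensitivity check for a Beta-Binomial model.
# Posterior for a Beta(a, b) prior with s successes in n trials
# is Beta(a + s, b + n - s), so the posterior mean is directly computable.

successes, trials = 7, 20  # hypothetical data

priors = {
    "uniform Beta(1,1)": (1.0, 1.0),
    "Jeffreys Beta(0.5,0.5)": (0.5, 0.5),
    "informative Beta(5,5)": (5.0, 5.0),
}

results = {}
for name, (a, b) in priors.items():
    post_a = a + successes
    post_b = b + (trials - successes)
    results[name] = post_a / (post_a + post_b)
    print(f"{name}: posterior mean = {results[name]:.3f}")
```

If the posterior means agree closely across the candidate priors, the conclusion is insensitive to the choice of prior; if they diverge, the prior is doing real work and deserves more scrutiny.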

Used on page 98

Also known as perturbation analysis