explainable AI

Terms from Artificial Intelligence: humans at the heart of algorithms

Explainable AI is the term used to refer to the overall aim of making the decisions and outputs of AI more understandable, and to the range of technologies that can help achieve this. Many forms of machine learning create large internal representations, such as the weights in neural networks, which are very difficult to understand. In addition, AI has been shown to exhibit bias, and the effects of errors in AI can be life threatening, for example in an autonomous vehicle. This has led to the desire for systems that are more transparent or comprehensible, and to legislation, such as the European General Data Protection Regulation, that requires companies to provide explanations of critical decisions. Explainability may be achieved by favouring machine learning techniques, such as decision trees, that are inherently more explainable. Alternatively, various methods have been developed to make black-box models more transparent, often using perturbation techniques such as SHAP or LIME.
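The perturbation idea behind methods such as LIME can be illustrated with a minimal sketch (a hypothetical black-box function and illustrative numbers, not the actual LIME algorithm): sample points near the instance being explained, query the black box, and fit a simple linear surrogate whose coefficients serve as local feature importances.

```python
import numpy as np

# Hypothetical black-box model: we can only observe its outputs.
def black_box(x):
    return np.sin(x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(0)

# Instance whose prediction we want to explain
x0 = np.array([0.5, 1.0])

# 1. Perturb: sample points in a small neighbourhood of x0
samples = x0 + rng.normal(scale=0.1, size=(500, 2))
preds = black_box(samples)

# 2. Fit an interpretable surrogate (a linear model) to the
#    black box's behaviour in that neighbourhood
X = np.column_stack([samples - x0, np.ones(len(samples))])
coef, *_ = np.linalg.lstsq(X, preds, rcond=None)

# 3. The slopes approximate the local influence of each feature
#    (here they should be close to cos(0.5) and 2 * 1.0)
print(coef[:2])
```

The surrogate is faithful only near `x0`; that locality is what lets a simple, explainable model stand in for a complex one.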

Defined on page 514

Used on pages 4, 11, 178, 336, 441, 451, 482, 485, 493, 513, 514, 515, 529, 567, 572

Also known as explainable, explainability