An algorithm (in particular a machine learning algorithm) is said to be transparent if humans can make sense of its internal processes. The opposite of transparency is a black-box model, whose complexity means one simply has to accept its outputs on trust. Transparency may be inherent in the kind of model, such as (small) decision trees or rule-based systems, or achieved by applying techniques for explainable AI to an otherwise black-box model. In machine learning, transparency may apply to:
- decision rules – whether the final outcome of machine learning is comprehensible; for example, a small set of rules (as opposed to, say, a vast number of neural network weights);
- learning process – whether the algorithm used to determine the decision rules is comprehensible; for example, a top-down tree-learning algorithm such as ID3 (as opposed to, say, a genetic algorithm).
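A minimal sketch of the "small set of rules" sense of transparency, using a hypothetical loan-screening task (the rules and thresholds below are invented for illustration, not taken from the text): every decision the model makes can be traced to one explicit, human-readable rule, in contrast to a neural network whose decision is distributed over many weights.

```python
def classify_loan(income: float, debt_ratio: float, past_defaults: int) -> str:
    """A transparent, rule-based classifier: each outcome is explained
    by exactly one human-readable rule (hypothetical example rules)."""
    if past_defaults > 0:
        return "reject"         # Rule 1: any past default -> reject
    if debt_ratio > 0.5:
        return "reject"         # Rule 2: over half of income goes to debt -> reject
    if income >= 30000:
        return "approve"        # Rule 3: sufficient income -> approve
    return "manual review"      # Fallback: borderline cases go to a human

print(classify_loan(income=45000, debt_ratio=0.2, past_defaults=0))  # approve
```

Because the whole model is three rules and a fallback, a human can audit it exhaustively; that exhaustive auditability is what is lost in a black-box model.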
Used in Chap. 19: page 310; Chap. 20: pages 313, 315, 321; Chap. 21: pages 331, 332, 333; Chap. 23: page 365
Also known as transparent