Contents
- 21.1 Overview
- 21.2 Introduction
- 21.2.1 Why We Need Explainable AI
- 21.2.2 Is Explainable AI Possible?
- 21.3 An Example -- Query-by-Browsing
- 21.3.1 The Problem
- 21.3.2 A Solution
- 21.3.3 How It Works
- 21.4 Human Explanation -- Sufficient Reason
- 21.5 Local and Global Explanations
- 21.5.1 Decision Trees -- Easier Explanations
- 21.5.2 Black-box -- Sensitivity and Perturbations
- 21.6 Heuristics for Explanation
- 21.6.1 White-box Techniques
- 21.6.2 Black-box Techniques
- 21.6.3 Grey-box Techniques
- 21.7 Summary
Glossary items referenced in this chapter
accuracy, adversarial learning, autonomous car, backpropagation, base rate, bias, big data, black-box machine learning, Boolean network, boundary example, Cambridge Analytica scandal, central example, chi-squared, classification, clustering, common ground, contingency table, database, database query, decision tree, deep neural network, divide and conquer, entropy, expert system, explainable AI, extensional representation, fitness function, forward reasoning, generative adversarial network, genetic algorithm, gig economy, grey-box techniques, Grice's conversational maxims, heuristic evaluation function, hotspot analysis, ID3, image recognition, information visualisation, intensional representation, latent space, LIME, logic, machine learning, neural network, open source, overfitting, penumbra, perturbation techniques, pinch-point layer, probability, Query-by-Browsing, recommender systems, relevance feedback, search engine, self-organising map, sensitivity analysis, SHAP, sigmoid activation function, sigmoid function, similarity matrix, similarity measure, social media, social media bots, sub-symbolic systems, sufficient reason, symbolic systems, threshold, transparency, trust, visualisation, white-box model, white-box techniques, word2vec