This microsite outlines a number of heuristics for developing explainable AI systems. Early versions of this material were distributed privately under the title “AIX Kitbag” (AI explainability) before the term XAI had become established for explainable AI. The current pages are based on the document produced for a keynote entitled “Sufficient Reason” at HCD for Intelligent Environments in Belfast on 3rd July 2018.
The techniques are roughly classed as white-box, black-box or grey-box, but this is not a hard-and-fast distinction, as many techniques have a hybrid nature. This high-level breakdown is also used in “Artificial Intelligence: humans at the heart of algorithms” (CRC Press/Taylor & Francis, in press), alongside the local/global explainability dimension, which is effectively orthogonal (in that the heuristics can be applied locally or globally).
Several of the heuristics are inspired by techniques for eliciting and externalising human-expert knowledge, and by what counts as an acceptable ‘explanation’ in human–human conversation. In explaining ourselves to one another we do not expect to dump the neural assembly of our brains, yet we still manage to create what we regard as acceptable accounts of our models or reasoning.
The focus here is on creating more humanly comprehensible ways to represent the behaviour of machine learning, but as these typically involve some level of data reduction or simplification, they may also prove useful in creating more robust machine representations.
Classes of heuristic (contrasted in the sketch after this list):
- white-box – using knowledge of the internal structure of specific algorithms
- black-box – using only external behaviour
- grey-box – pivoting on an internal layer
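To make the three classes concrete, here is a minimal sketch of how each would approach the same model. The toy two-layer network, variable names and values are illustrative assumptions, not part of the kitbag itself: the white-box probe reads the weights directly, the black-box probe touches only the external `predict` function, and the grey-box probe pivots on the hidden layer.

```python
# Illustrative sketch only: a tiny hand-rolled two-layer network, probed in
# white-box, black-box and grey-box style. All names here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 3 input features -> 4 hidden units -> 1 output score
W1 = rng.normal(size=(4, 3))   # input-to-hidden weights
W2 = rng.normal(size=(1, 4))   # hidden-to-output weights

def hidden(x):
    return np.tanh(W1 @ x)               # internal layer activations

def predict(x):
    return float((W2 @ hidden(x))[0])    # external behaviour only

x = np.array([0.5, -1.0, 2.0])

# White-box: use knowledge of the internal structure -- read the weights
# directly to see which input feeds each hidden unit most strongly.
dominant_input = np.abs(W1).argmax(axis=1)
print("white-box: dominant input per hidden unit:", dominant_input)

# Black-box: use only external behaviour -- finite-difference sensitivity
# of the output to each input feature, with no access to W1 or W2.
eps = 1e-4
sensitivity = [
    (predict(x + eps * np.eye(3)[i]) - predict(x)) / eps for i in range(3)
]
print("black-box: output sensitivity per feature:", np.round(sensitivity, 3))

# Grey-box: pivot on an internal layer -- treat the hidden activations as an
# intermediate vocabulary and explain the score in terms of those units.
h = hidden(x)
contribution = W2.flatten() * h          # each hidden unit's share of the score
print("grey-box: per-hidden-unit contribution:", np.round(contribution, 3))
```

Note that the same probes could be run for a single input (a local explanation, as here) or aggregated over many inputs (a global one), reflecting the orthogonal local/global dimension mentioned above.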