Cheat Mastermind and Explainable AI

How a child’s puzzle game gives insight into more human-like explanations of AI decisions

Many of you will have played Mastermind, the simple board game with coloured pegs where you have to guess a hidden pattern.  At each turn the person holding the hidden pattern scores the challenger’s guess, until the challenger finds the exact colours and arrangement.

As a child I imagined a variant, “Cheat Mastermind”, where the hider is allowed to change the hidden pegs mid-game so long as the new arrangement is consistent with all the scores given so far.

This variant gives the hider a more strategic role, but it also changes the mathematical nature of the game.  In particular, if the hider is good at their job, the challenger faces the worst case even when adopting a minimax strategy.
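To make this concrete, here is a minimal Python sketch of the idea (the colour names and function names are my own illustration, not from any particular implementation): the standard Mastermind score, plus the check a cheating hider must make before swapping the hidden pegs, namely that the new arrangement would have produced exactly the same scores for every guess so far.

```python
from collections import Counter

def score(secret, guess):
    """Standard Mastermind score: (exact, colour_only).
    exact = right colour in the right position;
    colour_only = right colour, wrong position (excluding exact matches)."""
    exact = sum(s == g for s, g in zip(secret, guess))
    overlap = sum((Counter(secret) & Counter(guess)).values())  # colour matches, any position
    return exact, overlap - exact

def consistent(candidate, history):
    """A cheating hider may switch to `candidate` only if it would have
    given the same score for every (guess, score) pair announced so far."""
    return all(score(candidate, guess) == given for guess, given in history)

# e.g. a single guess has already been scored as 1 exact + 1 colour-only peg
history = [(("red", "blue", "green", "yellow"), (1, 1))]
print(consistent(("red", "purple", "purple", "blue"), history))  # True: swap allowed
print(consistent(("red", "blue", "purple", "purple"), history))  # False: would have scored (2, 0)
```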

More recently, as part of the TANGO project on hybrid human-AI decision making, we realised that the game can be used to illustrate a key requirement for explainable AI (XAI).  Nick Chater and Simon Myers at Warwick have been looking at theories of human-to-human explanations and highlighted the importance of coherence: the need for consistency between the explanation we give for a decision now and the decisions we make in the future.  If I explain a food choice by saying “I prefer sausages to poultry”, you would expect me to subsequently choose sausages if given the choice.

Cheat Mastermind captures this need to make our present decisions consistent with those in the past.  Of course, in the simplified world of puzzles this is a perfect match, but in real-world decisions things are more complex.  Our explanations are often ‘local’ in the sense that they are about a decision in a particular context, but still, if future decisions disagree with earlier explanations, we need to be able to give a reason for the exception: “turkey dinners at Christmas are traditional”.

Machine learning systems and AI offer various forms of explanation for their decisions or classifications.  In some cases this may be a nearby example from the training data, in others a heat map of the areas of an image that were most important in making a classification, or an explicit rule that applies locally (in the sense of ‘nearly the same data’).  The way these are framed initially is very formal, although they may be expressed in more humanly understandable visualisations.
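As a simple illustration of the first of these, here is a hypothetical sketch of an example-based explanation (the data and the distance measure are invented purely for illustration): the system justifies a prediction by pointing to the most similar item in its training data.

```python
def nearest_example(case, training_data, distance):
    """'Explain' a prediction by returning the closest training item."""
    return min(training_data, key=lambda item: distance(case, item["features"]))

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# invented data: (height cm, weight kg) -> clothing size
training_data = [
    {"features": (170, 65), "label": "medium"},
    {"features": (192, 95), "label": "large"},
]
print(nearest_example((174, 70), training_data,
                      lambda case, feats: euclidean(case, feats)))
# -> {'features': (170, 65), 'label': 'medium'}
```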

Crucially, because these start in the computer, most can be checked or even executed (in the case of rules) by the computer.  This offers several possible strategies for ensuring future consistency or at least dealing with inconsistency … all very like human ones.

  1. highlight inconsistency with previous explanations: “I know I said X before, but this is a different kind of situation”
  2. explain inconsistency with previous explanations: “I know I said X before, but this is different because of Y”
  3. constrain consistency with previous explanations by adding the previous explanation “X” as a constraint when making future decisions. This may only be possible with some kinds of machine learning algorithms.
  4. ensure consistency by using the previous explanation “X” as the decision rule when the current situation is sufficiently close; that is, completely bypassing the original AI system.

The last mimics a crucial aspect of human reasoning: by being forced to reflect on our unconscious (type 1) decisions, we create explicit understanding and then may use this in more conscious rational (type 2) decision making in the future.

Of course, strategy 3 is precisely Cheat Mastermind.
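By way of illustration, here is a hypothetical Python sketch of strategies 3 and 4 (the rule format, the `base_model` and the food example are my own assumptions, not a particular XAI system): past explanations are stored as explicit local rules; a sufficiently close new case can be decided by the old rule itself (strategy 4), while otherwise the stored rules act as constraints that the underlying model’s answer must not contradict (strategy 3).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Explanation:
    """A previously given explanation, kept as an explicit local rule."""
    applies: Callable[[dict], bool]   # which cases the explanation commits us on
    decision: str                     # the decision it committed us to

def decide(case, base_model, past, reuse_if_covered=True):
    covered = [ex for ex in past if ex.applies(case)]

    # Strategy 4: a sufficiently close case is decided by the earlier
    # explanation itself, bypassing the underlying model entirely.
    if reuse_if_covered and covered:
        return covered[0].decision

    decision = base_model(case)

    # Strategy 3 (the Cheat Mastermind move): accept the model's answer only
    # if it agrees with every explanation that covers this case; otherwise the
    # clash has to be surfaced and explained away (strategies 1 and 2).
    for ex in covered:
        if ex.decision != decision:
            raise ValueError(f"contradicts an earlier explanation: {ex.decision!r}")
    return decision

# e.g. "I prefer sausages to poultry" stored as an explicit commitment
prefers_sausages = Explanation(
    applies=lambda case: {"sausages", "chicken"} <= set(case["options"]),
    decision="sausages")

print(decide({"options": ["chicken", "sausages"]},
             base_model=lambda case: case["options"][0],  # stand-in for the real model
             past=[prefers_sausages]))                    # -> 'sausages'
```

With `reuse_if_covered=False` the same stored rule acts purely as a constraint: the stand-in model is consulted, but its answer of ‘chicken’ would be rejected as inconsistent with the earlier explanation.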


Free AI book and a new one coming …

Yes a new AI book is coming … but until then you can download the first edition for FREE 🙂

Many years ago Janet Finlay and I wrote a small introduction to artificial intelligence.  At the time there were several Bible-sized tomes … some of which are still the standard textbooks today.  However, Janet was teaching a master’s conversion course and found that none of these books were suitable for taking the first steps on an AI journey, especially for those coming from non-computing disciplines.

Over the years it faded to the back of our memories, with the brief exception of the time when, after we’d nearly forgotten it, CRC Press issued a Japanese translation.  Once or twice the thought of doing an update arose, but quickly passed.  This was partly because our main foci were elsewhere, but also, at the risk of insulting all my core-AI friends, not much changed in core AI for many years!

Coming soon … Second Edition

Of course, over recent years things have changed dramatically, hence my decision, nearly 25 years on, to create a new edition that maintains the aim of giving a rich but accessible introduction, while capturing some of the recent trends and giving these a practical and human edge.  Following the T-model of teaching, I’d like to help both newcomer and expert gain a broad perspective of the issues and landscape, whilst giving enough detail for those who want to delve into a more specific area.

A Free Book and New Resources

In the meantime the publisher, Taylor & Francis/CRC, has agreed to make the PDF of the first edition available free of charge.  I have updated some of the code examples from the first edition and will be incrementally adding new material to the second edition micro-site, including slides, case studies, video and interactive materials.  If you’d like to teach using this, please let me know your views on the topics and also whether there are areas where you’d like me to create preliminary material with greater urgency.  I won’t promise to be able to satisfy everyone, but I can use this to adjust my priorities.

Why now?

The first phase of change in AI was driven by the rise of big data and the increasing use of forms of machine learning to drive adverts, search results and social media.  Within user interface design, many of the fine details of colour choices and screen layout are now decided using A–B testing … slight variants of interfaces delivered to millions of people – shallow, without understanding and arguably little more than bean counting, but in numerous areas vast data volume has been found to be ‘unreasonably effective’ at solving problems that were previously seen to be the remit of deep AI.

In the last few years deep learning has taken over as the driver of AI research and often also of media hype.  Here it has been the sheer power of computation, partly due to Moore’s Law, with computation nearly a million times faster than it was when that first edition was written nearly 25 years ago.  However, it has also been enabled by cloud computing, allowing large numbers of computers to attack a single problem efficiently.  Algorithms that might have been conceived of but dismissed as impractical in the past have become commonplace.

Alongside this has been a dark side of AI, from automated weapons and mass surveillance, to election rigging and the insidious knowledge that large corporations have gathered through our day-to-day web interactions.  In the early 1990s I warned of the potential danger of ethnic and gender bias in black-box machine learning and I’ve returned to this issue more recently as those early predictions have come to pass.

Across the world there are new courses running or being planned and people who want to know more.  In Swansea we have a PhD programme on people-first AI/big data, and there is currently a SIGCHIItaly workshop call out for Teaching HCI for AI: Co-design of a Syllabus. There are several substantial textbooks that offer copious technical detail, but can be inaccessible for the newcomer or those coming from other disciplines.  There are also a number of excellent books that deal with the social and human impact of AI, but without talking about how it works.

I hope to be able to build upon the foundations that Janet and I established all those years ago to create something that fills a crucial gap: giving a human-edge to those learning artificial intelligence from a computing background and offering an accessible technical introduction for those approaching the topic from other disciplines.