It has happened to us all: you are in the middle of doing something and the screen suddenly changes, apparently at random. If you are experienced you think, “I must have touched something by accident” or, “perhaps the page timed out”; maybe you try the back button on a web page or an Android phone, or simply back off and try again. Sometimes this is merely disorienting and mildly annoying; sometimes you lose the text you spent half an hour carefully crafting. Now imagine you are less confident, perhaps older or less experienced, or have motor-control issues, whether through illness, disability, or an activity such as running or driving.
Things may even go well: you have just managed to create a new payee in your banking app, and you wonder, “how did I do that?”. Or you’ve been in the middle of something, got interrupted by a notification or phone call, and then think, “where was I?”.
AI can make this worse, whether by acting autonomously or because a side effect of making the interpretation of our actions more intelligent is that interfaces become less predictable and harder to make sense of. For AI we are used to thinking about issues of explainability (Guidotti et al.); indeed, the author has been writing about the importance of transparency in ameliorating machine-learning bias since the early 1990s (Dix 1992), and the ‘right to an explanation’ is enshrined within EU law (EU 2024).
However, AI is simply adding to existing UI complexity. With or without AI, we need to be able to ask “what happened?”, “what did I just do?”, and “how can I do this again next time?”.
We need explainable user interfaces.