The Abomination of AI – part 1 – setting the scene

AI can be used for good or bad purposes, as well as for frivolous time wasting!  However, AI also has larger-scale impacts: it interacts badly with the processes of the global free market, simultaneously amplifying the least satisfactory aspects of the free market while undermining the fundamental assumptions of market economics.  The resulting runaway effects pose an existential risk to democracy and human dignity.

This is the first of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk, as well as detailed notes and references. Section numbers refer to the full report, which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full-length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

AI can be used for tremendous good, not least in medicine, as well as for frivolous and dangerous uses, such as exploitative online pornography.  However, it also has large-scale structural impacts on the very nature of our world.  The levels of financial investment in AI development, and the financial and environmental costs of data centres, can seem obscene, especially as climate change and political instability are threatening to tear down the apparent stability of the late twentieth century.  AI has intensified some of the feedback effects of digital technology, creating unprecedented emergent monopolies that leave nations as well as individuals feeling all but powerless.  These are huge issues, and ones that countries, including Malaysia, are struggling to cope with.  However, there are also positive actions we can take as researchers and designers to ameliorate some of the problems, and in the process create better and more resilient products that really serve people.

1.  Introduction

The word ‘abomination’ is not widely used, and sounds apocalyptic, often with religious connotations.  Here I’m using it in its broader sense of something that is awful to the point of being at the edge of evil.

And that sounds a very strong thing to say about AI itself.  In fact I’m talking more about the AI industry, but not simply the fact that it is an industry governed by profits and power; that is true of many industries, such as oil or plastics.  AI is special.  There is something about the nature of AI itself, and the way it interacts with market forces in the world, that is problematic and different from other technologies.

I’ve touched upon this issue before in other talks and writing, but this is the first time I’ve focused on it centrally.

1.1  Projects and People

The ideas here are closely related to two projects, one past, one current.  First is Not-Equal (https://not-equal.tech), an EPSRC Network Grant funding a programme of work on the digital economy and social justice [CC25]; I led the algorithmic social justice strand. Clara Crivellaro, who was the overall project lead, and I are in the process of writing a book on Algorithmic Social Justice in the CRC/T&F AI for Everything series.  The issues in this talk will form part of one of its chapters.

Second is an EU Horizon project, TANGO (https://tango-horizon.eu/), investigating human–machine decision making.  This is very much looking at the ways in which AI can be used more positively in specific systems and decision-making situations, including public policy.  However, it is less concerned with the macro-economic issues of this talk.

2.  Neutral Technology?

So there is a sort of myth that technology is neutral.  As researchers, particularly in universities, you do your work and come up with new ideas or technology, but how it’s used is up to other people.  It’s up to the politicians; it’s up to industry – not for us to worry about.  This idea of technology neutrality has been heavily critiqued over the years: saying, “we just gave them the guns, we didn’t pull the trigger”, just doesn’t sound convincing!

Of course there is some truth in the neutrality argument.  Most technologies can be used in good ways or bad ways, but for some technologies, say nerve poisons, there are clearly aspects that drive them one way rather than another.

The title ‘abomination of AI’ sounds very negative, but at the scale of individual applications of technology, AI is certainly not like nerve poison!  It can be used in good ways and bad ways, just like pretty much any technology.  So while this talk focuses on certain intrinsic dangers of AI, I certainly don’t think everything about AI is bad, otherwise I wouldn’t be writing textbooks about it.

The dangers I’ll be highlighting are at a macroeconomic scale, and are pretty negative, so after discussing these we’ll return to some of the constructive things that you can do within your own discipline or work to help ameliorate some of the problems.

Before that, let’s look at the smaller scale of individual applications of AI, good, bad and …

 

2.1  The Good – health and UX

Images: [NF24]; CSBIOPASSION, CC BY-SA 4.0 <https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons. https://commons.wikimedia.org/wiki/File:C12orf29_AlphaFold.png

There are clearly some wonderful things being achieved with AI, not least some of the amazing advances in medicine and health that have been happening because of AI.  You may recall the 2024 Nobel Prize for Chemistry was shared between a chemist and two AI researchers [NF24], the latter for their role in developing AlphaFold, which has revolutionised protein structure prediction [JE21].

Closer to home, in my book AI for HCI [Dx26b] I look at the ways AI can help in user interface design and in creating better computer systems for people.

 

 

2.2  The Bad

Bias and discrimination

Paper: [Dx92]

Back in 1992, I first wrote about the dangers of ethnic, gender and social bias, in particular in black-box machine learning algorithms [Dx92].  To be honest, at that point I thought it was going to become a real issue within the next few years.  However, that was just before the big AI winter, so in fact it got put off for 25 years or so.

Paper: [Dx92] Images: [Da21,Gl21,Ma21,Bu21]

But now, of course, bias is a really critical issue, often in the press, including problems with facial recognition systems [Da21,Gl21,Ma21,Bu21].  In the US court system there is extensive controversy about the use of systems that recommend whether or not people should be given parole [AL16,LM16].

 

Online exploitative pornography

Images: [CH26,MC26]

Another issue that has been hot in the press is the use of online platforms to produce exploitative pornography using AI.  While the UK was still wringing its hands deciding what to do, Malaysia and Indonesia led the world in banning Grok [CH26,MC26].  Even for a country, standing up to companies as big as X and figures like Elon Musk is no small thing. In fact Musk did partially backtrack on Grok, and while the change is still limited, it does show that the global steamroller of AI is not inevitable.

 

2.3  The Ugly … or simply frivolous

Image: [Wa24]

So there are some really good uses of AI and some bad ones, but for the general public the majority, while not always ugly, are at best frivolous.  The world is filled with images of cats on skateboards and cats dancing, albeit not all as ugly as the Chubby TikTok craze [Wa24]!  You have almost certainly seen some AI-generated cat images or videos, and they are often quite sweet, like cartoons emphasising the things we find appealing – large-eyed cuddly pets doing cute things.

This is not bad, it’s just frivolous.  And frivolous can be good; indeed fun is important for a full life and has been studied in HCI [BM18], including my own work on Christmas crackers [Dx18]. We pay to go to the circus, watch a comedy film or buy a toy for a child.  But maybe there is a point when the sheer volume and cost of frivolity becomes excessive?

Coming next …

Part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is such a technology.

 

References

[AL16] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, 23 May 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[BM18] Blythe, M. and Monk, A. (2018). Funology 2: Critique, Ideation and Directions. In: Blythe, M., Monk, A. (eds) Funology 2: From Usability to Enjoyment. Human–Computer Interaction Series. Springer, Cham.

[Bu21] Sarah Butler (2021). Uber facing new UK driver claims of racial discrimination. The Guardian, 6 Oct 2021. https://www.theguardian.com/technology/2021/oct/06/uber-facing-new-uk-driver-claims-of-racial-discrimination

[CH26] Osmond Chia and Silvano Hajid (2026). Malaysia and Indonesia block Musk’s Grok over explicit deepfakes. BBC News. 12 January 2026. https://www.bbc.co.uk/news/articles/cg7y10xm4x2o

[CC25] Clara Crivellaro, Lizzie Coles-Kemp, Alan Dix, and Ann Light (2025). Co-creating conditions for social justice in digital societies: modes of resistance in HCI collaborative endeavors and evolving socio-technical landscapes. ACM Transactions on Computer-Human Interaction. Vol. 32(2), Article No. 15, pp. 1–40. https://doi.org/10.1145/3711840

[Da21] Nicola Davis (2021).  From oximeters to AI, where bias in medical devices may lurk. The Guardian, 21 Nov 2021. https://www.theguardian.com/society/2021/nov/21/from-oximeters-to-ai-where-bias-in-medical-devices-may-lurk

[Dx92] A. Dix (1992).  Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451.  https://alandix.com/academic/papers/neuro92/

[Dx18] A. Dix (2018). Deconstructing Experience: Pulling Crackers Apart. In: Blythe, M., Monk, A. (eds) Funology 2. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-68213-6_29

[Dx26b] A. Dix. (2026). AI for Human–Computer Interaction. CRC Press. (in press). https://alandix.com/ai4hci/

[Gl21] Jessica Glenza (2021). Minneapolis poised to ban facial recognition for police use. The Guardian, 12 Feb 2021. https://www.theguardian.com/us-news/2021/feb/12/minneapolis-police-facial-recognition-software

[JE21] Jumper, J., Evans, R., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589. https://doi.org/10.1038/s41586-021-03819-2

[LM16] Larson, J., Mattu, S., Kirchner, L. and Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, 23 May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[Ma21] Jyoti Madhusoodanan (2021). These apps say they can detect cancer. But are they only for white people?  The Guardian,  28 Aug 2021. https://www.theguardian.com/us-news/2021/aug/28/ai-apps-skin-cancer-algorithms-darker

[MC26] Liv McMahon and Laura Cress (2026). X could face UK ban over deepfakes, minister says. BBC News 9 January 2026. https://www.bbc.co.uk/news/articles/c99kn52nx9do

[NF24]  The Nobel Foundation (2024). The Nobel Prize in Chemistry 2024. NobelPrize.org. Nobel Prize Outreach 2025. Sat. 17 May 2025.  https://www.nobelprize.org/prizes/chemistry/2024/summary/

[Wa24] Aidan Walker (2024). The unstoppable rise of Chubby: Why TikTok’s AI-generated cat could be the future of the internet. BBC, 20 August 2024. https://www.bbc.co.uk/future/article/20240819-why-these-ai-cat-videos-may-be-the-internets-future