The Abomination of AI – part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is also such a technology.

This is the second of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself that interacts with market forces in the world in a way that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

 

3.  The Impact of AI

3.1  What AI does

Okay, so the good, the bad and the ugly/frivolous are all things that AI does – the direct application of AI in various areas.

When I design an application using AI, I might use it well or I might use it badly.  This is clearly an important issue when we examine our own use of AI and other people’s use of AI, especially if we are involved in developing AI or developing the user interfaces that employ AI or provide AI for other people.

 

3.2  How AI shapes society

However, with any technology, there’s something that can be more important than what it does.

Some kinds of technology only have an impact where they are used directly.  If I use a nail to connect two pieces of wood, it doesn’t really have a great effect beyond the thing I’m actually constructing.

But some kinds of technology fundamentally reshape the nature of society.  Not every technology does this, but some do, and when this happens, it has a far bigger effect than the direct application of the technology in particular areas.

AI is just such a technology.   When you are using AI for a purpose, you might change your mind and choose to use something else.  But when society has been changed by AI, everybody, even those who choose not to use AI at all, is affected by it.  This is happening now.

 

3.3  How cars have shaped society

Image: By Remi Jouan – Photo taken by Remi Jouan, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=7245143

To help understand this large-scale process, before looking at the societal impact of AI itself,  let’s first look at another technology that has fundamentally reshaped society – the car.

There are positive things cars do. When you get into a car, it does things for you. It helps you get from A to B, keeps you dry, perhaps gives you a sense of independence.

There are also negative things it does. You might have an accident.  If you are not a law-abiding citizen, you might speed, or you might drink alcohol or take drugs, and then have an accident and injure other people.

These are things we do as individuals with a car.  You may also be indirectly affected even if you don’t have a car: as a pedestrian, you might still be involved in a car accident.  However, by and large these are about things you choose to do.

However, irrespective of whether you choose to use cars or not, the whole physical and economic nature of society is shaped by the car and by the internal combustion engine.   Cities have road networks that allow people to get in and out.  This leads to urban sprawl at the edge of cities along the lines of connection.  Because of this organisation, shops and services are placed at car distances away.  So if you don’t have a car (and 84% of the world’s population don’t [MS24]), it becomes difficult to access things.  You find yourself poorer in a sense, more disadvantaged than you would otherwise have been, because of the actions of other people – car poverty.

Economists talk about externalities, the fact that when I do something, it affects others who aren’t directly doing it [LM02].  The emergence of car poverty is one of the externalities of car use.   Of course there are other externalities like global warming from the petrol engines themselves and pollution [EP19].  Even electric cars produce all sorts of nasty particles from the wear of tyres on the road.
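The externality idea has a standard textbook formalisation (my own addition here, not part of the original talk): the full social cost of an activity is the private cost borne by the person doing it plus the external cost imposed on everyone else.

```latex
\underbrace{MSC}_{\text{marginal social cost}}
  \;=\;
\underbrace{MPC}_{\text{marginal private cost}}
  \;+\;
\underbrace{MEC}_{\text{marginal external cost}}
```

Because each driver weighs only the private cost when deciding whether to drive, whenever the external cost is positive (pollution, congestion, car poverty) the activity is over-consumed relative to the social optimum: the market balances benefits against MPC rather than MSC.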

These things are so woven into the fabric of society that it is very hard to break away from them. For example, there have been amazing advances in autonomous vehicles, but really, trying to design a car that drives itself is a bit of a stupid thing to do.  Why not just have better trains and metros, which work far more easily with automation?  But of course, our whole infrastructure is organised around roads and cars.  Therefore, when you want to do something new, you have to fit within it.

This societal structure changes things dramatically, much more than the direct impact.

Coming next …

Part 3 – a different kind of apocalypse
Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.


 

References

[EP19]  European Parliament (2019). CO2 emissions from cars: facts and figures (infographics). European Parliament. https://www.europarl.europa.eu/news/en/headlines/society/20190313STO31218/co2-emissions-from-cars-facts-and-figures-infographics

[LM02] Stan Liebowitz and Stephen Margolis (2002). Network effects and externalities. In The new Palgrave dictionary of economics and the law. Palgrave Macmillan. pp.1329–1333.

[MS24] Miner, P., Smith, B. M., Jani, A., McNeill, G., & Gathorne-Hardy, A. (2024). Car harm: A global review of automobility’s harm to people and the environment. Journal of Transport Geography, 115, 103817.  https://doi.org/10.1016/j.jtrangeo.2024.103817

 

The Abomination of AI – part 1 – setting the scene

AI can be used for good or bad purposes as well as frivolous time wasting!  However, there are also larger-scale impacts of AI as it interacts badly with the processes of the global free market, simultaneously amplifying the least satisfactory aspects of the free market and at the same time undermining the fundamental assumptions of market economics.  The resulting runaway effects pose an existential risk to democracy and human dignity.

This is the first of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references. Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

AI can be used for tremendous good, not least in medicine, as well as frivolous and dangerous uses, such as exploitative online pornography.  However, it also has large-scale structural impacts on the very nature of our world.  The levels of financial investment in AI development and the financial and environmental costs of data centres can seem obscene, especially as climate change and political instability are threatening to tear down the apparent stability of the late 20th century.  AI has intensified some of the feedback effects of digital technology, creating unprecedented emergent monopolies that leave nations as well as individuals feeling all but powerless.  These are huge issues, and ones that countries, including Malaysia, are struggling to cope with.  However, there are also positive actions we can take as researchers and designers to ameliorate some of the problems and in the process create better and more resilient products that really serve people.

1.  Introduction

The word ‘abomination’ is not widely used, and sounds apocalyptic, often with religious connotations.  Here I’m using it in its broader sense of something that is awful to the point of being at the edge of evil.

And that sounds a very strong thing to say about AI itself.  In fact I’m talking more about the AI industry, but not simply the fact that it is an industry governed by profits and power – that is true of many industries such as oil or plastics.  AI is special.  There is something about the nature of AI itself that interacts with market forces in the world in a way that is problematic and different from other technologies.

I’ve touched upon this issue before in other talks and writing, but this is the first time I’ve focused on it centrally.

1.1  Projects and People

The ideas here are closely related to two projects, one past, one current.  First is Not-Equal (https://not-equal.tech), which was an EPSRC Network Grant funding a programme of work related to the digital economy and social justice [CC25]; I led the algorithmic social justice strand. Clara Crivellaro, who was the overall project lead, and I are in the process of writing a book on Algorithmic Social Justice in the CRC/T&F AI for Everything series.  The issues in this talk will form part of one of its chapters.

Second is an EU Horizon project, TANGO (https://tango-horizon.eu/), investigating human–machine decision making.  This is very much looking at the ways in which AI can be used more positively in specific systems and decision-making situations, including public policy.  However, it is less concerned with the macro-economic issues in this talk.

2.  Neutral Technology?

So there is a sort of a myth that technology is neutral.  As researchers, particularly in universities, you do your work and come up with new ideas or technology, but how it’s used is up to other people.  It’s up to the politicians; it’s up to industry – not for us to worry about.  This idea of technology neutrality has been heavily critiqued over the years: saying “we just gave them the guns, we didn’t pull the trigger” just doesn’t sound convincing!

Of course there is some truth in the neutrality claim.  Most technologies can be used in good ways or bad ways, but for some technologies, say nerve poisons, there are clearly aspects that drive them one way rather than another.

The title ‘abomination of AI’ sounds very negative, but at the scale of individual applications, AI is certainly not like nerve poison!  It can be used in good ways and bad ways, just like pretty much any technology.  So while this talk focuses on certain intrinsic dangers of AI, I certainly don’t think everything about AI is bad; otherwise I wouldn’t be writing textbooks about it.

The dangers I’ll be highlighting are at a macroeconomic scale, and are pretty negative, so after discussing these we’ll return to some of the constructive things that you can do within your discipline or work to help ameliorate some of the bad things.

Before that, let’s look at the smaller scale of individual applications of AI, good, bad and …

 

2.1  The Good – health and UX

Images: [NF24],  CSBIOPASSION, CC BY-SA 4.0
<https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons.  https://commons.wikimedia.org/wiki/File:C12orf29_AlphaFold.png

There are clearly some wonderful things being achieved with AI, not least some of the amazing advances in medicine and health that have been happening because of AI.  You may recall the 2024 Nobel Prize for Chemistry was shared between a chemist and two AI researchers [NF24]; the latter for their role in developing AlphaFold, which has revolutionised protein structure prediction [JE21].

Closer to home, in my book AI for HCI [Dx26b] I look at the ways AI can help in user interface design and in creating better computer systems for people.

 

 

2.2  The Bad

Bias and discrimination

Paper: [Dx92]

Back in 1992, I first wrote about the dangers of ethnic, gender and social bias, in particular in black-box machine learning algorithms [Dx92].  To be honest, at that point, I thought it was going to become a real issue in the next few years.  However, that was just before the big AI winter, so in fact it got put off for 25 years or so.

Paper: [Dx92] Images: [Da21,Gl21,Ma21,Bu21]

But now, of course, bias is a really critical issue, often in the press, including problems with facial recognition systems [Da21,Gl21,Ma21,Bu21].  In the US court system there is extensive controversy about the use of systems that recommend whether or not to give people parole [AL16,LM16].

 

Online exploitative pornography

Images: [CH26,MC26]

Another issue that has been hot in the press is the use of online platforms to produce exploitative pornography using AI.  While the UK was still wringing its hands deciding what to do, Malaysia and Indonesia led the world in banning Grok [CH26,MC26].  Even for a country, standing up to industries as big as X and Elon Musk is no small thing. In fact Musk did partially backtrack on Grok, and while the change is still limited, it does show that the global steamroller of AI is not inevitable.

 

2.3  The Ugly … or simply frivolous

Image: [Wa24]

So there are some really good uses of AI and some bad ones, but for the general public the majority of uses, while not always ugly, are at best frivolous.  The world is filled with images of cats on skateboards and cats dancing, albeit not all as ugly as the Chubby TikTok craze [Wa24]!  You have almost certainly seen some AI-generated cat images or videos, and they are often quite sweet, like cartoons emphasising the things we find appealing – large-eyed cuddly pets doing cute things.

This is not bad, it’s just frivolous.  And frivolous can be good, indeed fun is important for a full life and has been studied in HCI [BM18] including my own work on Christmas Crackers [Dx18]. We pay to go to the circus, watch a comedy film or buy a toy for a child.  But maybe there is a point when the sheer volume and cost of frivolity is excessive?

Coming next …

Part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is such a technology.

 

References

[AL16] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, 23 May 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[BM18] Mark Blythe and Andrew Monk (2018). Funology 2: critique, ideation and directions. In Funology 2: From Usability to Enjoyment. Springer, Cham.

[Bu21] Sarah Butler (2021). Uber facing new UK driver claims of racial discrimination. The Guardian, 6 Oct 2021. https://www.theguardian.com/technology/2021/oct/06/uber-facing-new-uk-driver-claims-of-racial-discrimination

[CH26] Osmond Chia and Silvano Hajid (2026). Malaysia and Indonesia block Musk’s Grok over explicit deepfakes. BBC News. 12 January 2026. https://www.bbc.co.uk/news/articles/cg7y10xm4x2o

[CC25] Clara Crivellaro, Lizzie Coles-Kemp, Alan Dix, and Ann Light (2025). Co-creating conditions for social justice in digital societies: modes of resistance in HCI collaborative endeavors and evolving socio-technical landscapes. ACM Transactions on Computer-Human Interaction. Vol. 32(2), Article No:15, pp.1–40  https://doi.org/10.1145/3711840

[Da21] Nicola Davis (2021).  From oximeters to AI, where bias in medical devices may lurk. The Guardian, 21 Nov 2021. https://www.theguardian.com/society/2021/nov/21/from-oximeters-to-ai-where-bias-in-medical-devices-may-lurk

[Dx92] A. Dix (1992).  Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451.  https://alandix.com/academic/papers/neuro92/

[Dx18] A. Dix (2018). Deconstructing Experience: Pulling Crackers Apart. In: Blythe, M., Monk, A. (eds) Funology 2. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-68213-6_29

[Dx26b] A. Dix. (2026). AI for Human–Computer Interaction. CRC Press. (in press). https://alandix.com/ai4hci/

[Gl21] Jessica Glenza (2021). Minneapolis poised to ban facial recognition for police use. The Guardian, 12 Feb 2021. https://www.theguardian.com/us-news/2021/feb/12/minneapolis-police-facial-recognition-software

[JE21]  Jumper, J., Evans, R., et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). https://doi.org/10.1038/s41586-021-03819-2

[LM16] Larson, J., Mattu, S., Kirchner, L. and Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, 23 May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[Ma21] Jyoti Madhusoodanan (2021). These apps say they can detect cancer. But are they only for white people?  The Guardian,  28 Aug 2021. https://www.theguardian.com/us-news/2021/aug/28/ai-apps-skin-cancer-algorithms-darker

[MC26] Liv McMahon and Laura Cress (2026). X could face UK ban over deepfakes, minister says. BBC News 9 January 2026. https://www.bbc.co.uk/news/articles/c99kn52nx9do

[NF24]  The Nobel Foundation (2024). The Nobel Prize in Chemistry 2024. NobelPrize.org, Nobel Prize Outreach. Accessed 17 May 2025.  https://www.nobelprize.org/prizes/chemistry/2024/summary/

[Wa24] Aidan Walker (2024). The unstoppable rise of Chubby: Why TikTok’s AI-generated cat could be the future of the internet. BBC, 20th August 2024.  https://www.bbc.co.uk/future/article/20240819-why-these-ai-cat-videos-may-be-the-internets-future