The Abomination of AI – part 8 – summary and recap

This final post recaps what we’ve learnt about the runaway nature of the AI industry, how it undermines free markets, and how we can make a difference. The core question is not what can AI do, but what should AI do?

This is the last of the series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, interacting with the nature of market forces in the world, that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, create positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets. Thus the very nature of digital technology and AI breaks free markets, leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

§6.   Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.

§7.   It all seems too big, requiring national and international responses.  But we can make a difference using appropriately chosen small AI (including none). Plus, this good use of AI is good for business too.

8.  In summary

So, in summary,  AI can do amazing good things, but often also bad things.

More crucial is how AI is shaping society.  We have to think explicitly about this, because AI has its own dynamic.  That dynamic is not good by its nature, so we have to control it to make it serve society. Some of this needs action at a governmental and intergovernmental level, because the AI industry is so big compared with most countries.

However, there are things you can do.  You can sometimes choose not to use AI, or to use small AI, but always try to use AI appropriately rather than just throwing it at a problem and washing your hands of the wider impact.

The core question is not solely what can AI do?  While it still does some things frighteningly badly, AI is getting better and better at doing more and more.

But the big question, the Big, BIG question is what should AI do?

And that’s the question we need to ask ourselves continually both in our individual work and at societal level.

AI will be an abomination, but only if we let it be.

Updates

It is now four months since I gave the talk at ICoSCI 2026 on which this blog series has been based.  In that short time there have been many changes, some strengthening the arguments and some challenging them.   In addition, I’ve had helpful feedback from several people, especially extensive comments by Mark Bernstein; so many thanks to Mark and others who have engaged with this series.

I’ve written updates at the end of several blogs.  In part 3 “A different kind of apocalypse” recent reports of agentic AI ignoring guardrails make Terminator-style AI devastation seem less distant.  Since those updates, the publicity around Claude Mythos’ ability to find bugs in established codebases led Anthropic to deem it too dangerous to release without first allowing selected partners to use it to check their own security [An26a].  While this may be in part a PR exercise, it is being taken seriously by government and pan-government organisations [AISI26,Go26].  Furthermore, as well as these external threats, Anthropic are also monitoring for the potential of AI ‘sabotage’, that is:

“when an AI model with access to powerful affordances within an organization uses its affordances to autonomously exploit, manipulate, or tamper with that organization’s systems or decision-making in a way that raises the risk of future catastrophic outcomes”  [An26b]

Updates at the end of part 6 “should we worry?“ reinforce the difficulty of switching AI models and the way OpenClaw has emphasised the under-pricing of AI use plans, and hence the way that these might adjust (upwards!) over time, just as we’ve seen with other forms of digital technology.  At the end of part 7 “what can we do?” there is a lovely example of really smart AI, combining AI and plain old computing to achieve better and cheaper outcomes.

In addition to these updates, recent developments (since the updates!) and comments have raised a couple of issues that I’d like to address.

Size matters

Since giving the talk in January, there have been further developments in reducing the costs of AI, not least DeepSeek’s V4 release, which has focused on making model training and execution more efficient [DS26].  Nvidia have released open-source models designed to run on local small-scale installations (as in no more than a few $1,000 Nvidia chips) [Ca26,Br26]; some of these are designed for specialised applications, others are more general purpose.  It could be that Nvidia are positioning themselves to spread their market beyond the small number of AI software mega-corporations, but in the process they may weaken the emergent monopolies of these software players.  However, Nvidia’s own near-monopoly position in AI hardware is being challenged by DeepSeek’s use of Huawei chips [CC26].

This seems to suggest several potential scenarios:

  1. The big players (OpenAI, Anthropic) see off the cheaper, but less powerful alternatives, maybe using market dominance to retain near-monopoly positions as discussed in section 5 (blog part 4 and part 5).
  2. The lean, mean models become powerful enough to open up the market fully, so that the current mega companies and their investors lose their ‘bet’ on market dominance, leading to massive drops in their market values, and potentially a major stock exchange crash.
  3. The cheaper models become viable alternatives, but do not immediately compete on sheer power and corporate commitment, leaving the mega AI corporations strong but less all-encompassing, and making the individual solution strategies in part 7 easier.

The impact of (2) on the global economy would be pretty disastrous, especially following the massive hits of the US-Israel/Iran and Ukraine/Russia wars, so, on balance, the softer movement of scenario (3) feels like the best outcome.

Perversely, the big AI companies would be likely to weather the storm of (2), as the investment already committed provides a cash buffer – in the same way that, during the dot-com period, tech companies with second or third round investment before the crash often survived, including Lastminute.com, the IPO of which triggered the market re-evaluation of tech in 2000.  The founders and early funders would see paper devaluations, but otherwise still be in control of huge businesses; the smaller, more recent investors would lose out, however, including many global pension funds.

Cats and Consummate Consumerism

Section 3 (blog part 2) is, in part, quite dismissive of the vast volume of ‘frivolous’ use of generative AI.  Later, I hope it is clear (e.g. from the example of the Doctor’s Kitchen app) that this does not mean criticising all personal use of AI — appropriate use of AI can be very beneficial, in particular allowing far more individualised access to digital technology.  Indeed, LLMs are already democratising access to many forms of professional advice that are beyond the reach of individuals and small businesses [Fu26].

However, that does leave the cats.   If that is what people want to create and view, surely that is their business?

In some ways these uses of AI are the ultimate form of consumerism — like the boxfuls of unused plastic toys, the kitchen appliances that lie in the dark recesses of cupboards, the 1.6 billion items of clothing in UK wardrobes that have never been worn [BBC22] — but now all digital, thrust before us by the relentless algorithms of social media.  Items we never knew we wanted instantly become essential, produced apparently for free and provided in precisely the quantity and kind that makes us want more.

Is this a choice, when the algorithms know how to nudge and channel us [HS26], when LLMs have learnt the lessons of the confidence trickster, and when the content itself is addictive [KK25]?  Is this a free-market equivalent of the Opium Wars?

For individuals many of the costs are effectively hidden, especially at the point of use.  Just as no fleece wearer or takeaway coffee drinker deliberately chooses to put microplastics in breast milk, the environmental and social impacts of digital and AI products are often physically and temporally distant and in many cases suffered by others [Ma24].

This distancing is in part due to digital communication and in part the diffuse relationship between the loci of production and use, especially when a large proportion of cost is in training.  However, the distancing is in part deliberate, not least the under-pricing of services to build reliance, a trick that has been part of digital products almost since their onset and very much in the playbook of the neighbourhood drug dealer.

One reason for listing the almost unbelievable facts and figures of AI growth in part 3 (§4.2) is to force us to face these choices explicitly.

The speed of change …

As is evident, things are moving rapidly. That said, although the details are changing, many of the large-scale impacts of AI on society and economics outlined in this series build on longer-term trends in digital technology that have been evident at least since the turn of the millennium.

With many technologies in the past, the societal impacts have only become apparent in hindsight.  With AI there are surprises, especially its spurts of almost unimaginably rapid progress, but we are also increasingly aware of the dangers and pitfalls.  The issues described here are part of this conversation, aiming to ensure that we enter this exciting and dangerous time with eyes wide open.

Coming soon …

If you are interested in these issues, look out for the book AI or Social Justice, which Clara Crivellaro and I are currently working on.  The book website already includes a growing collection of resources, including case studies and videos.

References

[AISI26] AI Security Institute (2026) Our evaluation of Claude Mythos Preview’s cyber capabilities. AI Security Institute, Department of Science, Innovation and Technology. Apr 13, 2026. https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities

[An26a]  Anthropic (2026).  Project Glasswing: Securing critical software for the AI era. Accessed 4th May 2026. https://www.anthropic.com/glasswing

[An26b]  Anthropic (2026).  Sabotage Risk Report: Claude Opus 4.6. Accessed 4th May 2026.   https://anthropic.com/claude-opus-4-6-risk-report

[BBC22]  BBC News (2022).  UK wardrobes stuffed with unworn clothes, study shows.  BBC News, 7 October 2022. https://www.bbc.co.uk/news/science-environment-63170952

[Br26]  Kari Briski (2026).  NVIDIA Launches Nemotron 3 Nano Omni Model, Unifying Vision, Audio and Language for up to 9x More Efficient AI Agents.  Nvidia blog.  April 28, 2026.  https://blogs.nvidia.com/blog/nemotron-3-nano-omni-multimodal-ai-agents/

[CC26]  Caiwei Chen (2026).  Three reasons why DeepSeek’s new model matters: The long-awaited V4 is more efficient and a win for Chinese chipmakers.  MIT Technology Review, April 24, 2026  https://www.technologyreview.com/2026/04/24/1136422/why-deepseeks-v4-matters/

[Ca26] Bryan Catanzaro (2026). NVIDIA Launches Open Models and Data to Accelerate AI Innovation Across Language, Biology and Robotics.  NVIDIA Blog, October 28, 2025.  https://blogs.nvidia.com/blog/open-models-data-ai/

[DS26]  DeepSeek-AI (2026).  DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence.  Accessed 29th April 2026.  https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf

[Fu26]  Maria Ines Fuenmayor (2026). The Privilege of Refusing AI. Codified, Substack, Apr 22, 2026.  https://codifiedai.substack.com/p/the-privilege-of-refusing-ai

[Go26]  Gordon M. Goldstein (2026). Six Reasons Claude Mythos Is an Inflection Point for AI—and Global Security.  Council on Foreign Relations, April 15, 2026. https://www.cfr.org/articles/six-reasons-claude-mythos-is-an-inflection-point-for-ai-and-global-security

[HS26]  Kali Hays, Nardine Saad and Regan Morris (2026).  Campaigners welcome Meta and YouTube’s defeat in landmark social media addiction trial.  BBC News, 25 March 2026.  https://www.bbc.co.uk/news/articles/c747x7gz249o

[KK25]  Kooli, Chokri, Youssef Kooli, and Eya Kooli (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder?.  Asian Journal of Psychiatry 107:104476.  https://doi.org/10.1016/j.ajp.2025.104476

[Ma24]  Murgia, Madhumita (2024). Code Dependent: How AI Is Changing Our Lives. Picador.

 

Facial recognition — what does accuracy mean?

A Guardian article at the weekend reported on the increasing number of people being ejected from stores after being misidentified by facial recognition systems as past shoplifters [Mu26].   This commercial use of facial recognition has even less oversight than police use, which has also been causing alarm. The people at the centre of the report were eventually offered gift vouchers by the shops concerned, but only after considerable personal embarrassment and lengthy and complex processes to clear their names (or, to be precise, faces).

According to the article, Facewatch, the company providing the facial recognition service, claim a 99.98% accuracy rate.  This sounds high.  Does this mean that the cases reported are rare, albeit unfortunate, incidents?

Let’s unpack this a little.

According to the UK Office for National Statistics annual report on Crime in England and Wales, there are just over half a million cases of shoplifting a year [ONS26]; the Facewatch web site offers a higher figure of 2 million across the whole UK, maybe attempting to take into account underreporting [FW26].  Let’s use this larger figure.

In the UK there are about 55 million adults; assuming an average of one shop visit per day, that is about 20 billion shopping visits per year.1

So, if a facial recognition system said no-one was a past shoplifter, it would attain 99.99% accuracy!2  If on the other hand the accuracy is equal for shoplifters and non-shoplifters (that is, false positive and false negative rates are the same), then there would be one misidentified innocent for every correctly identified shoplifter — hardly rare.  If we use the ONS shoplifting figures, this rises to three misidentifications for each correct one.
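To make the arithmetic concrete, here is a back-of-envelope sketch in Python (the equal-error-rate assumption is one simple reading of the 99.98% figure, for illustration only):

```python
visits = 55_000_000 * 365             # ~20 billion UK shop visits a year
shoplifting = 2_000_000               # Facewatch's UK-wide estimate [FW26]
base_rate = shoplifting / visits      # about 1 in 10,000 visits

# A trivial classifier that never flags anyone is correct on every
# visit except the shoplifting ones:
trivial_accuracy = 1 - base_rate      # roughly 99.99%

# Suppose false positive and false negative rates are equal, at the
# 0.02% implied by a 99.98% accuracy claim:
error_rate = 0.0002
true_positives = shoplifting * (1 - error_rate)          # caught
false_positives = (visits - shoplifting) * error_rate    # wrongly flagged

# Wrongly flagged innocents are of the same order as real shoplifters
# caught: misidentification is routine, not rare.
```

The exact ratio depends on which shoplifting estimate and error-rate reading you take, but the base-rate effect dominates in every case.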

One assumes that Facewatch adjusted the system’s recognition thresholds to have a lower false positive rate (wrongly accused) than this, instead accepting a greater proportion of missed true shoplifters; but in that case an overall 99.98% figure is unachievable.  Most likely the reported figure is based on training data with, perhaps, equal numbers of photos of shoplifters and non-shoplifters (essential to allow effective learning), so the 99.98% accuracy figure refers to this data, not to the numbers of each encountered in realistic (let alone real) use.

In both this case and others, such as rare-disease diagnosis, seemingly high stated accuracy rates may not be as good as they at first seem, and certainly need a lot of context to be meaningful. As is clear, this is by no means an abstract mathematical discussion, but one that affects real lives.  In the case of facial recognition, the article also reminds us that these kinds of systems often have lower accuracy rates, and in particular higher false positive rates (that is, wrongly accused), for Black and Asian people and for women.

 

References

[FW26]   Facewatch (2026).  Home page. Accessed 4th May 2026.  https://www.facewatch.co.uk

[Mu26]  Jessica Murray.  Guilty until proven innocent: shoppers falsely identified by facial recognition system struggle to clear their names.  The Guardian, 3 May 2026.  https://www.theguardian.com/technology/2026/may/03/guilty-until-proven-innocent-shoppers-falsely-identified-by-facial-recognition-struggle-to-clear-their-name

[ONS26]  Office for National Statistics (2026).  Crime in England and Wales: year ending December 2025.  ONS Centre for Crime and Justice, 23 April 2026.  https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/crimeinenglandandwales/yearendingdecember2025

 

  1. It is really hard to keep track of these huge numbers.  I’m well practised at it, but I initially made a small slip and was out by a factor of 20.[back]
  2. When I read accuracy figures in academic papers on machine learning, I often do the equivalent calculation for a trivial classifier … as in this case, it is often no worse than the algorithm.[back]

The Abomination of AI – part 7 – what can we do?

It all seems too big, requiring national and international responses.  But we can make a difference using appropriately chosen small AI (including none). Plus, this good use of AI is good for business too.

This is the seventh of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, interacting with the nature of market forces in the world, that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, create positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets. Thus the very nature of digital technology and AI breaks free markets, leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

§6.   Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.

 

7.  What can we do?

These issues all seem too big, frighteningly so.   So what can you do?

You might be a policy maker, or on a committee advising government.  If so, you might be in a position to make changes at that scale.  Most of us do not have such high-level influence, but there are changes you can make within your own sphere to help ameliorate some of these potential dangers.  I’ll focus on the UX designer or AI developer, but some of the ideas are ones you might be able to adopt in your own personal use or within an organisation.

 

7.1  No AI

One option is simply to say “no” to AI.

If you are a designer, ask, “do I need AI at all in my project?”  Of course, everybody now expects every product to say ‘AI powered’, so you may not be able to avoid AI altogether, but it could be very simple AI.  But do ask whether you need it at all; and if you don’t, why do you feel you need to use it?

 

7.2  Small AI

If you do decide to use AI, you can opt for small AI.

If you are using language models or other generative AI, you might use smaller models, the kind that have been deliberately designed to be able to run on less powerful hardware. There are many good reasons to do this.  Indeed, Apple have been encouraging smaller AI because they want the AI to run on people’s personal devices, not just in the cloud.  This is because privacy is a strong part of the company brand.

Where it is appropriate you could use traditional AI, which is usually much smaller in terms of memory and computation.

Purely from a technical perspective, there are some really interesting research challenges in this area, both in terms of human computer interaction (see my 2024 talk on ‘Patient Interaction’ [Dx24]) and also pure technical AI.

Images: [Di22,Sa23,Dx25,DS25]

You’ll have seen some of the modifications of algorithms that are transforming this landscape, including open models such as OPT and Llama [ZD22,TH23], LoRA [HS22] and LiGO [WP23].  DeepSeek [DS24,DS25] made waves when US export restrictions on NVIDIA chips forced Chinese innovators to adopt a far leaner and smarter approach to LLM development [LF24].  Debatably, DeepSeek’s learning might have piggybacked off some of the other LLMs [We25b], but certainly at execution time it used far fewer resources than other LLMs at the time. Now other LLMs have adopted lessons from DeepSeek, and all are looking to perform more efficiently, so there is a small shift in thinking away from a simplistic ‘bigger is better’ approach [Hi20].
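To see why an approach like LoRA makes adapting a large model so much cheaper, here is a minimal numpy sketch (the dimensions are illustrative, not those of any real model):

```python
import numpy as np

d, k, r = 512, 512, 8                  # weight matrix size vs. low LoRA rank
rng = np.random.default_rng(0)

W = rng.normal(size=(d, k))            # frozen pretrained weight
A = rng.normal(size=(r, k)) * 0.01     # trainable low-rank factor
B = np.zeros((d, r))                   # starts at zero, so behaviour is unchanged

def lora_forward(x):
    # Effective weight is W + B @ A; only A and B are ever updated.
    return x @ (W + B @ A).T

full_params = d * k                    # parameters touched by full fine-tuning
lora_params = r * (d + k)              # parameters in the LoRA adapter
# 262,144 vs 8,192: the adapter is about 3% of the weight matrix
```

Because B is initialised to zero, the adapted model starts out behaving exactly like the pretrained one; training then only ever updates the small A and B matrices.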

 

7.3  When to use AI

There is also a choice about when to use AI.

The most obvious use of AI is at execution time, in a user interface or delivered application, as part of the service provided.  This can of course be small AI, or even no AI at all.

But you can also use AI at design time.  You might use big AI to create small AI for the delivered system, for example using techniques to compress the model.  You can also use AI as part of the UX process to critique a user interface, create rapid prototypes, or propose design ideas [Dx26b].  In addition, AI-based coding tools can create AI-free (or low-AI) systems.

Crucially, if you use big AI to help create a (smaller) product, it effectively gets reused again and again and again.   So it is less expensive – less financially expensive for the company, but also less expensive in terms of its impact on the environment and society.

In fact, this is a really powerful use of AI. For instance, one of the things I argue elsewhere is that AI critiques of UIs will be far better for accessibility than even the best designers.  This is in part because it is really hard for us to think about even obvious diversity – what is it like to be blind or deaf, or to have a physical disability?  An automated design tool can check a concept or prototype against vast numbers of different types of perceptual and physical abilities, as well as combinations.  Even more important, it is almost impossible for us to imagine what it is like to be somebody who thinks differently, for example somewhere distant from ourselves in a neurodivergent space.  I don’t think AI will be good at this, but I think it will be better than we are.

 

7.4  How to use AI

Finally, if you are using AI, think carefully about the kind of AI you are going to use and how to incorporate it into a system.  For many years I’ve talked about appropriate intelligence, most often in relation to AI error and the need to design human–AI systems that together are robust and effective, not focusing on AI accuracy alone [DB00,BD23].  However, the same lesson can be applied more broadly.

Often, we think about human interaction with AI, but it can be useful to think of a three-way interaction between human(s), AI and plain-old computing – that is, hand-coded algorithms or classic AI. Now look at each kind of AI you are thinking of using and ask: what is it good for?

What kind of things do I mean? One of the problems with traditional AI was that it was good with hard-nosed rules, but much more problematic with fuzzy things.  There are various techniques, such as Bayesian methods and fuzzy logic, but they require you to formalise the fuzziness into probabilities or similar functions.  Amongst other things, this limited various forms of natural language understanding and common-sense reasoning.

Of course large language models are really good at dealing with the nuances of language, but less so when one tries to get LLMs to be very precise, not least because they keep inventing stuff!

So as you design for AI, ask: what is it good for, and how can I use it most appropriately?

As an example of the appropriate use of AI,  my wife uses an app from “The Doctor’s Kitchen” (https://www.thedoctorskitchen.com) to help keep track of the health value of food.

You take a photo of a plate of food before you eat it and the app creates a report on its nutritional value: how much fibre and protein it contains and its inflammation index.  Is it likely to be good for you or bad?

You could imagine doing this by writing a complicated prompt to an LLM, or training a deep learning algorithm with lots of plates of food and hand-curated reports.  The app does not work like that.

What it does is to use image processing AI to analyse the plate and work out what food is on it.  Indeed, you can press an edit button to see what it thinks you’ve got on your plate, and, if it’s got it wrong, edit it.  One assumes that a log of these edits helps to further train the image processing AI.

So the AI has been used for the fuzzy part of the task: working out that there are crisps on the plate but no cake. It even manages to recognise hummus and estimate how much.  It is amazingly good, but does sometimes get things wrong in terms of the volume or even what is there; however, when that happens you can easily see and correct it.

So this is using AI for the fuzzy bit.

This list of the plate’s contents then goes into a standard algorithm that uses tables of nutritional values to tell how much protein there is in, say, 10 grams of almonds, adds this up for the plate, and hence generates the final nutritional report.

AI and traditional computing together — combining the two using the best aspects of each.
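A toy sketch of this hybrid shape (the `recognise_foods` stand-in, the food names and the nutrient values are all invented for illustration; this is not the app’s actual code):

```python
# Illustrative nutrient table: grams of protein and fibre per 100 g.
# Values are rough, for illustration only.
NUTRITION = {
    "hummus":  {"protein": 7.9, "fibre": 6.0},
    "almonds": {"protein": 21.2, "fibre": 12.5},
}

def recognise_foods(photo):
    """Stand-in for the fuzzy AI step: identify foods and estimate grams."""
    # A real system would run an image-recognition model here and let
    # the user review and edit the result; we return a fixed guess.
    return [("hummus", 50), ("almonds", 10)]

def nutrition_report(photo):
    """Plain-old computing: deterministic table lookup and arithmetic."""
    totals = {"protein": 0.0, "fibre": 0.0}
    for food, grams in recognise_foods(photo):
        for nutrient, per_100g in NUTRITION[food].items():
            totals[nutrient] += per_100g * grams / 100
    return totals

report = nutrition_report("plate.jpg")
# e.g. 10 g of almonds contributes 21.2 * 10/100 = 2.12 g of protein
```

Note how swapping in a better recognition model only touches `recognise_foods`; the deterministic reporting step is untouched.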

Note that this is more explainable: you know what’s going on.

It is also more flexible: you can choose to enhance some components and change others.

There is also less vendor tie-in.  This is not removed entirely, as a new AI would need to be retrained.  However, it is easier to swap just the food-recognition part than it would be if the whole system were a single AI.

This is good from a business point of view, but it also means you are using less large-scale AI with its environmental, financial and democratically damaging effects, when you could be using simpler computation.

Coming next …

Part 8 – summary and recap

This final post will recap what we’ve learnt about the runaway nature of the AI industry, how it undermines free markets, and how we can make a difference. The core question is not what can AI do, but what should AI do?

 

Update

Since the talk in January I read about A.T.L.A.S. (Adaptive Test-time Learning and Autonomous Specialization), an AI coding system built by business student Johnathon Tigges, who wanted to challenge the assumption that “only the biggest players can build meaningful things” [Ti26].  It is able to outcompete the big coding agents by being clever: rather than just throwing a problem at a big code-optimised LLM and asking for a solution, it uses AI to generate lots of potential code fragments and tests them, using the best to further refine the AI model … all on a consumer GPU.  A lovely example of smart use of AI!  For a more detailed description see Sebastian Buzdugan’s Medium story about it [Bu26].
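The generate-and-test loop can be caricatured in a few lines (a toy stand-in, not ATLAS itself: candidates here are numbers rather than code fragments, and the ‘tests’ are a simple score):

```python
import random

def propose(state, n=5):
    # Stand-in for "use AI to generate candidate code fragments":
    # here, candidates are small random variations on the current state.
    return [state + random.uniform(-1, 1) for _ in range(n)]

def score(candidate, target=3.0):
    # Stand-in for running the test suite: closer to target is better.
    return -abs(candidate - target)

def refine(state, rounds=20):
    # Keep only candidates that score better, and iterate.
    for _ in range(rounds):
        best = max(propose(state), key=score)
        if score(best) > score(state):
            state = best
    return state

random.seed(0)
result = refine(0.0)   # moves steadily towards the target
```

The point of the pattern is that cheap generation plus an objective test harness can substitute for raw model power.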

References

[BD23] Alba Bisante, Alan Dix, Emanuele Panizzi, and Stefano Zeppieri (2023). To err is AI. In Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, pp.1–11. https://doi.org/10.1145/3605390.3605414

[Bu26] Sebastian Buzdugan (2026). Why a $500 GPU Can Beat Claude Sonnet on Coding Benchmarks. Medium. Mar 28, 2026. https://medium.com/@sebuzdugan/why-a-500-gpu-can-beat-claude-sonnet-on-coding-benchmarks-6c8169ffe4fe

[DS24]  DeepSeek-AI (2024).  DeepSeek-V3 Technical Report. arXiv preprint. 27 Dec 2024. https://arxiv.org/abs/2412.19437

[DS25]  DeepSeek-AI (2025).  DeepSeek-V3. GitHub Repository. Release v1.0.0. 27 Jun 2025. https://github.com/deepseek-ai/DeepSeek-V3

[Di22] Dickson, B. (2022). Can large language models be democratized? TechTalks, May 16, 2022. https://bdtechtalks.com/2022/05/16/opt-175b-large-language-models/

[DB00] A. Dix, R. Beale and A. Wood (2000).  Architectures to make Simple Visualisations using Simple Systems.  Proceedings of Advanced Visual Interfaces – AVI2000, ACM Press, pp. 51-60.  https://www.alandix.com/academic/papers/avi2000/

[Dx24] Alan Dix (2024). Patient Interaction – for well-being, productivity and sustainability. FUSION 2024, Kuala Lumpur, Malaysia, 28 Sept. 2024. https://www.alandix.com/academic/talks/FUSION2024/

[Dx25]  Dix, A. (2025). Artificial Intelligence – Humans at the Heart of Algorithms, 2nd Edition, Chapman and Hall.  https://alandix.com/aibook/

[Dx26b] A. Dix. (2026). AI for Human–Computer Interaction. CRC Press, in press. https://alandix.com/ai4hci/

[Hi20] Hinton, G. (2020). Extrapolating the spectacular performance of GPT3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters. Twitter (now X), Jun 10, 2020. https://x.com/geoffreyhinton/status/1270814602931187715

[HS22]  Hu, E. J., Shen, Y., et al. (2022). LoRA: Low-rank adaptation of large language models. ICLR, 1(2), 3. https://arxiv.org/abs/2106.09685

[LF24] Liu, A., Feng, B., et al. (2024). Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437. https://arxiv.org/abs/2412.19437

[Sa23] Sajid, H. (2023).  Artificial Intelligence: Can You Build Large Language Models Like ChatGPT At Half Cost? Unite.ai, May 11, 2023.  https://www.unite.ai/can-you-build-large-language-models-like-chatgpt-at-half-cost/

[Ti26]  Johnathon Tigges (2026).  A.T.L.A.S. – Adaptive Test-time Learning and Autonomous Specialization. GitHub. https://github.com/itigges22/ATLAS

[TH23] Touvron, H., Martin, L., et al. (2023). Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. https://arxiv.org/abs/2307.09288

[WP23] Wang, P., Panda, R., et al. (2023). Learning to grow pretrained models for efficient transformer training. arXiv preprint.  https://arxiv.org/abs/2303.00980

[We25b] Werner, J. (2025). Did DeepSeek Copy Off Of OpenAI? And What Is Distillation? Forbes, Jan 30, 2025. https://www.forbes.com/sites/johnwerner/2025/01/30/did-deepseek-copy-off-of-openai-and-what-is-distillation/

[ZD22]  Zhang, S., Diab, M. and Zettlemoyer, L. (2022). Democratizing access to large-scale language models with OPT-175B. Meta Research Blog, May 3, 2022. https://ai.meta.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/

 

 

 

The Abomination of AI – part 6 – should we worry?

Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.

This is the sixth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets. Thus the very nature of digital technology and AI breaks free markets, leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

6.  Should we worry?

6.1  Jobs and power

Image: Scottish Government, CC BY 2.0. https://commons.wikimedia.org/wiki/File:One_of_the_typing_pools_%283829002585%29.jpg

Does this matter?  So what if a small number of companies have notional multi-trillion balance sheets and are engaged in runaway development in the digital realm, so long as it doesn’t affect the real world?  But, of course, it does; the digital domain is leaking into the physical domain.

Of course, technology and automation have long had massive effects, with a gradual shift from human expertise to financial capital.  This dates back at least to the late 18th and 19th centuries and the rise of the Industrial Revolution.  At that time humans were still needed, but they went from being the experts, the people doing the weaving and spinning, to the people (including young children) who tended the machines, monitoring and knotting broken threads, and occasionally losing arms in the moving parts.  So it wasn’t that the humans were unnecessary; they merely fed the machines.

Moving into the 20th century, machines replaced humans more completely with fully automated production lines and industrial robots, although of course still with humans cleaning up between them.  In many parts of the global north, skilled manual work has all but disappeared through a combination of automation and outsourcing.

To some extent the impact of automation initially hit traditionally male jobs, but in the latter half of the 20th century, from about the early seventies on, it also hit clerical roles.  Until then every big organisation would have had a typing pool.  My own mother was for many years a typist, first in the War Department throughout the Second World War, and then in the Inland Revenue.  These typing pools consisted of ranks of people, usually women, typing sometimes from dictation and shorthand, and sometimes from other forms of handwriting.  Word processors basically destroyed the typing pool.  Managers, rather than dictating to somebody who then typed it up, would do the typing themselves directly into a word processor … and of course that now means 90% Microsoft Word — another emergent monopoly.

So, in general, skilled working class jobs have been destroyed by automation leaving a growing underclass with minimum wage jobs and gig work.

What we’re seeing now is that mid-range intellectual work is starting to be eaten by AI [BSI25].

You may have seen the MIT report which found that, while many companies were investing heavily in AI, around 95% of the projects were considered to be failing or underperforming [CP25].  So this is not yet universal, but in some areas, such as computing, many of the lower-range roles, typical graduate first jobs, are being replaced by AI.  Until recently the expert developer would have had several junior developers to do the grunt work; now this is done by AI.  Similar pictures are emerging in advertising, aspects of finance, and some of the large management consultancies [Ko25,Sw26,IPA26,KM26,Pa26].  In the UK, and even more so in other parts of the world, there are strong pushes to use AI much more within government, not least on the assumption that it will improve efficiency [GUK25,Dx26].

There’s a critical issue about who’s in control.  Think about the road network.  In the UK there are some private roads and also some toll roads, but the majority of roads, including almost all in urban areas, are owned by the local authority or central government.  That is, the vast majority of the road network is local in terms of its maintenance and control.  Imagine if the road network was instead owned by two or three major companies based on the west coast of America.  Imagine if every road in Malaysia, every road in Indonesia, as well as every road in the UK, was owned by two or three companies there.  If there’s a pothole in the road, it’s those companies to whom you have to complain.  Perhaps they decide to charge you to use the road outside your house, or decide to remove the roads entirely if they’re in dispute with you or your government.

That’s exactly the direction we are moving with AI and public services.  Even assuming the best intentions of the big AI players, this does feel worrying.  And of course this isn’t a choice you can make or not.  Just like the roads, once AI is embedded into public services, everything orients around it.

Returning to the changes in employment, once we lose the entry-stage jobs, there’s a clear problem for the people who would’ve had them.  All the graduates from our universities who would’ve been going into those jobs are being hit, and in the UK and some other parts of the world, on top of large student loans [DoE25,Pa25,Pa26].  This is creating a class of people who are underemployed, inexperienced, and quite likely disaffected with society.  Think of this in the light of the rise of extremism across the world.  Often this is dismissed as a problem of the uneducated, but here we are adding a vast number of highly educated people, disaffected with society, further spreading those extreme messages.

 

6.2  Locked into AI

This is also a problem within an organisation.  If you are not employing those early-career people, what happens in five or ten years’ time as your more experienced employees want to move up the organisation?  How do you fill those gaps if you haven’t been training people?

This might be something we need to address as universities: training people effectively to higher and higher levels so that they can jump in at that point.

Or the organisation may simply find it needs more AI – what it certainly can’t do is just turn off the AI, because it no longer has the people with the experience to do the jobs.  The company has become locked into the use of AI.

This is also true of data.  Microsoft have a guide entitled “Prepare your data for AI” [Ms26].  The use of AI does not come for free, but needs a rearrangement of data to support it.  One does wonder whether the same effort in making data ready for AI could be better spent making it ready for simpler statistical algorithms.

However, let’s assume you have put effort into reorienting your whole data estate around AI.  Your systems rapidly become AI dependent – your recent information and new data have become deeply embedded into the AI itself in ways that are often opaque.

Once you have bought into an AI system, you can’t just say, “well, let’s just swap to something else”.  It’s difficult even to swap vendors once it is that embedded.

 

6.3  Buy now … pay later

If you have a loan with interest, you know you have to pay for it eventually, but things can be less obvious.  When I was little, my mum had a Kays catalogue, a sort of 1960s equivalent of internet selling [WA17].  Its pages were full of big colour pictures of clothes, white goods, toys, etc. … it was usually the toys I was looking at.  You could buy things from the catalogue and pay over 20 weeks with no interest, but of course the things cost more than if you had the ready cash to buy them at a shop.  So effectively you were paying extra.
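The hidden cost of such ‘interest-free’ credit is easy to work out.  Here is a minimal sketch with entirely made-up prices (the real catalogue figures are long forgotten); the point is that even a modest premium, paid off over a few weeks, implies a surprisingly high annual rate:

```python
# Rough cost of 'interest-free' catalogue credit, using invented illustrative prices.
shop_price = 10.00       # hypothetical cash price in a shop (pounds)
catalogue_price = 11.00  # hypothetical catalogue price, paid over 20 weekly instalments
weeks = 20

# The premium is effectively the interest you pay for spreading the cost.
premium_pct = (catalogue_price - shop_price) / shop_price * 100

# Because the debt is repaid steadily, the average amount outstanding is
# roughly half the price, so a rough annualised rate is:
rough_annual_pct = (premium_pct / 100) / 0.5 * (52 / weeks) * 100

print(f"premium: {premium_pct:.0f}%")
print(f"implied annual rate: ~{rough_annual_pct:.0f}%")
```

On these invented numbers, a 10% premium repaid over 20 weeks works out at an implied annual rate of around 50% – far from interest-free.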

AI is currently in that ‘buy now, pay later’ mode, both globally and locally for individuals.  AI growth is funded by massive investment (as we discussed, absolutely huge), possibly more than ever before, except perhaps for the South Sea Bubble.  However, the income doesn’t in any way cover the costs, and the ratio between the expected income and the investment is way out of kilter with what you’d expect even for a digital company, let alone a physical one.

So how do the books add up?

If you’re an accountant in the company, or if you’re an investment manager, what are you thinking as you see these figures?  Why don’t you sound the alarm?  The reason is that you expect more money from that income stream in the future.  In early digital companies, like Amazon, you assumed this because the market would grow: the number of people using it would increase.

But AI already has lots of users, so instead you have two options.  The first is to find ways to produce more cheaply, which is happening to some extent already.  However, you don’t want it to get too cheap, otherwise competitors can enter the market.  The alternative, and your only real option, is to recoup your investment by charging more, or by getting the same customers to use more.  Either way, it is the customer who pays in the end!

This is no secret.  Fortune magazine, talking about OpenAI, said that its business plan relies on “what amounts to a bet on dominance” [Sm25].  That is, in putting in all that investment, what investors are hoping is that the company will become the AI company in an area that everybody is tied into.  And then of course they can charge pretty much what they like: a buy now – pay later world.  We’re using AI now, but the cost is going to come later.

 

Coming next …

Part 7 – what can we do?

It all seems too big, requiring national and international responses.  But we can make a difference using appropriately chosen small AI (including none). Plus, this good use of AI is good for business too.

 

Update.

Since the talk, I read about a woman who had developed a close relationship with a chatbot hosted on a version of ChatGPT that is due to be retired [He26]. While she could probably export her chat history and use that to reinitialise the new version of the software, it would not be the same.  We will soon start to hear similar stories for business and public systems as tech companies have not had a good record of backward compatibility, and this is all but impossible with current LLMs.

Also, in late January, OpenClaw was released [OC26].  This highlighted the way current payment models do not reflect the actual cost of use.  OpenClaw (originally called Clawdbot) is an open-source GitHub project that uses the Claude API to create an automated assistant coordinating web and desktop resources.  Within days of the launch, Anthropic enforced a long-standing, but previously unenforced, restriction on third-party use of its API and blocked OpenClaw for most user accounts, including its $200 Max account.  These accounts come with monthly usage limits, but the business model of even premium accounts depends on users NOT using their full monthly allowances.  OpenClaw encouraged full use of those limits, thus exposing that the true cost of full use vastly exceeds the subscription price [Ba26].

 

 

References

[Ba26] Novy Baf (2026).  Anthropic Pushed Its Most Loyal Developers Straight Into OpenAI’s Arms. OpenAI Didn’t Even Have to Ask.  The Nov Tech, 2nd Mar 2026.  https://www.thenovtech.com/p/anthropic-pushed-its-most-loyal-developers

[BSI25] British Standards Institution (2025). Evolving Together: AI, automation and building the skilled workforce of the future.  https://www.bsigroup.com/en-GB/insights-and-media/insights/whitepapers/evolving-together-flourishing-in-the-ai-workforce/

[Dx26] A. Dix. (2026). Beyond the Algorithm: Designing Human-Centric Public Service with AI. Talk at Service Design for Public Sector Spotlight Seminar series of challenges and opportunities between Design Cultures and Public Sector, Sapienza, University of Rome + Online, 4th February 2026. https://alandix.com/academic/talks/Rome-Seminar-Feb-2026/

[DoE25] Department for Education (2025). The impact of AI on UK jobs and training. November 2023.  https://www.gov.uk/government/publications/the-impact-of-ai-on-uk-jobs-and-training

[CP25]  Aditya Challapally, Chris Pease, Ramesh Raskar, Pradyumna Chari (2025). The GenAI Divide: State of AI in Business 2025. MIT NANDA, July 2025. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

[GUK25] Gov.UK (2025). AI to power national renewal as government announces billions of additional investment and new plans to boost UK businesses, jobs and innovation. Press release from Department for Science, Innovation and Technology, HM Treasury, Wales Office, The Rt Hon Liz Kendall MP, The Rt Hon Rachel Reeves MP and The Rt Hon Jo Stevens MP.  20 November 2025. https://www.gov.uk/government/news/ai-to-power-national-renewal-as-government-announces-billions-of-additional-investment-and-new-plans-to-boost-uk-businesses-jobs-and-innovation

[He26]  Stephanie Hegarty (2026). Rae fell for a chatbot called Barry, but their love might die when ChatGPT-4o is switched off. BBC News, 14 February 2026. https://www.bbc.co.uk/news/articles/crl43dxwwy9o

[IPA26] IPA (2026). IPA Agency Census 2025 shows workforce declines while diversity improves.  Institute of Practitioners in Advertising. 11 February 2026. https://ipa.co.uk/news/agency-census-2025/

[KM26] Lucy Knight and Sumaiya Motara (2026). The big AI job swap: why white-collar workers are ditching their careers. The Guardian,  11 Feb 2026. https://www.theguardian.com/technology/2026/feb/11/big-ai-job-swap-white-collar-workers-ditching-their-careers

[Ko25] Saskia Koopman (2025).  Big Four slash graduate jobs as AI takes on entry level work. City AM, 23 June 2025. https://www.cityam.com/big-four-slash-graduate-jobs-as-ai-takes-on-entry-level-work/

[Ms26] Microsoft (2026). Prepare your data for AI. Dated 20/1/2026.  https://learn.microsoft.com/en-gb/power-bi/create-reports/copilot-prepare-data-ai

[OC26] OpenClaw (2026).  OpenClaw — Personal AI Assistant. https://github.com/openclaw/openclaw

[Pa25] Joanna Partridge (2025). Gen Z faces ‘job-pocalypse’ as global firms prioritise AI over new hires, report says. The Guardian,  9 Oct 2025. https://www.theguardian.com/money/2025/oct/09/gen-z-face-job-pocalypse-as-global-firms-prioritise-ai-over-new-hires-report-says

[Pa26] Joanna Partridge (2026). More than a quarter of Britons say they fear losing jobs to AI in next five years. The Guardian,  25 Jan 2026. https://www.theguardian.com/business/2026/jan/25/more-than-quarter-britons-fear-losing-jobs-ai-next-five-years

[Sm25]  Dave Smith (2025). OpenAI says it plans to report stunning annual losses through 2028—and then turn wildly profitable just two years later. Fortune, November 12, 2025. https://fortune.com/2025/11/12/openai-cash-burn-rate-annual-losses-2028-profitable-2030-financial-documents/

[Sw26] Mark Sweney (2026). UK ad agencies undergo their biggest exodus of staff as AI threatens industry. The Guardian,  13 Feb 2026. https://www.theguardian.com/media/2026/feb/13/uk-ad-agencies-biggest-annual-exodus-of-staff-ai-threatens-industry

[WA17]  Worcestershire Archive and Archaeology Service (2017).  Christmas and Kays.  Explore the Past. 19th December 2017. https://www.explorethepast.co.uk/2017/12/christmas-and-kays/

 

 

 

 

The Abomination of AI – part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

This is the fifth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.1–§5.2.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

5.3  Digital and AI breaks market economics

So digital technology breaks market economics.  Yet this is what our whole world is built on.  Even countries that are not fully market economies, such as China, often rely upon market economics extensively, both internally and globally.  Indeed, market economics has driven nearly all late-20th-century trade and much before that, including the Industrial Revolution.  Now, market economics has not been good for everything, and certainly not for everybody, but it has had elements of success.  And now it is broken.

Digital technology breaks market economics and AI makes it worse.

One of the ways that AI makes this worse is that the new AI, large language models and the like, is built on big data and big computation.  This means it requires big business … really big business, business bigger than most countries, just to get in the game.

But once you’re in that game, you’ve got a large volume of people using your systems, generating more data, and perhaps the power to leverage that position to encourage other people to give you data.  For example, this can include governments giving certain companies access, sometimes exclusive, to public health data.  And, of course, this then means the successful companies have the money to invest in more data centres to process that data.

Here we have yet another positive feedback loop exacerbated by the huge computational and data needs of AI.

And of course that has effects: it has environmental impact, as seen in the data about energy and water use.

But also, because the companies have to be so big, you end up with a potential democratic deficit.  This was very evident in America during Trump’s inauguration, with the ‘tech bros’ surrounding him.  Although there have been some fallouts between some of them since, the power of big business was very evident.  And that’s in the US; smaller countries really struggle because the businesses are bigger than they are.

 

5.4  With the best will in the world …

So digital, by its very nature, leads to runaway inequality, which AI intensifies.  You have to work hard to stop that happening.

This doesn’t mean you can’t.  As we discussed, in our body’s immune system, we have positive feedback loops that are important to fight infection.  These would lead to autoimmune diseases if unchecked, but they are modified by negative feedback loops that control them.  Similarly, the macro-economic feedback loops of digital technology and AI are not unstoppable, but the natural progression is just for them to keep on going.

Now this potentially runaway growth of AI happens even if everybody plays nice.  It is not about evil owners of AI companies who are trying to control the world.  With the best will in the world, this will happen.

But, of course, they don’t always have the best will in the world.

Some of the problem is baked into our commercial legal systems.  In the UK, if you are on the board of directors of a company, your legal responsibility is to your shareholders, which typically means profit maximisation.  So even if you might have liked to do something better for society or the world, you are legally bound to do the thing that maximises profits.

So, the leaders of big AI are almost forced not to do the right thing, though how much they lean into that varies between individuals.

 

5.5  … Or not

Facebook internal strategy document quoted by Cory Doctorow [Do25]

In 2025 Meta, the owner of Facebook, was in the midst of an anti-trust case in the US regarding its takeover of Instagram in the early 2010s [Da25].  The US Government eventually lost its case against Meta, due largely to the emergence of TikTok as a competitor in the meantime.  However, as part of the case various internal Facebook documents came into the public domain.  Cory Doctorow, the open software campaigner, quotes from one internal strategy document, which shows that Mark Zuckerberg and Facebook understood precisely the role of emergent digital monopolies:

“Social networks have two stable equilibria: either everyone uses them, or no-one uses them.” [Do25]

“… The binary nature of social networks implies that there should exist a tipping point, ie some critical mass of adoption, above which a network will organically grow, and below which it will shrink.”

Other emails show that this understanding led to very deliberate attempts to stifle Instagram’s growth [Da25].  That is, Facebook was very aware of network effects and the presence of tipping points, and prepared to use techniques to ensure that it ended up on the side of the critical mass that it wanted to be on.

These statements were made in a largely pre-AI context (at least as AI is understood today), regarding the role of emergent monopolies in social media, but the effects are of course intensified by AI.  I’m sure Meta was not and is not alone in being aware of these effects and being prepared to use them.
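The tipping point described in those memos can be sketched as a toy adoption model (all constants are invented for illustration): above some critical mass of adoption the network grows organically towards everyone; below it, the network withers.

```python
# Toy model of the memo's 'tipping point': adoption grows above a critical
# mass and shrinks below it. All constants are invented for illustration.
def run(share, critical_mass=0.3, rate=1.0, steps=200):
    for _ in range(steps):
        # growth is positive above the critical mass, negative below it,
        # and tails off as adoption nears 0% or 100%
        share += rate * share * (share - critical_mass) * (1 - share)
    return share

above = run(0.35)   # just above the tipping point -> organic growth
below = run(0.25)   # just below it -> the network shrinks away

print(f"started above: {above:.3f}, started below: {below:.3f}")
```

Two starting shares only a few points apart end at opposite extremes – the “two stable equilibria” of the memo, with the critical mass as the unstable boundary between them.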

Coming next …

Part 6 – should we worry?

Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.


 

References

[Da25] David Dayen (2025). The Government Has Already Won the Meta Case. The American Prospect, April 16, 2025. https://prospect.org/2025/04/16/2025-04-16-government-already-won-meta-case-tiktok-ftc-zuckerberg/

[Do25] Cory Doctorow (2025). Mark Zuckerberg personally lost the Facebook antitrust case. Pluralistic, Apr 18, 2025. https://pluralistic.net/2025/04/18/chatty-zucky/

 

The Abomination of AI – part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

This is the fourth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

5  Why is this happening?

Why is this happening?  Well, we know the world is unequal; we know that the way free markets work means that big companies often benefit from economies of scale and get larger.  Is it just natural that the same is happening with AI?

The answer is ‘no’; this is clear from the way AI stocks have performed, unlike any previous (legitimate) business.  There are elements of the normal operation of markets, but there are particular properties of digital technology in general, and AI in particular, that break aspects of market economics and lead to emergent monopolies.

These are due to positive feedback loops.  If you are from an engineering background you’ll know about these, but for those who aren’t, we’ll take a little segue to look at positive feedback loops in general and then come back to how they apply in the economic sense.

 

5.1  Understanding feedback loops

Image: By Charles Schmitt – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44338386

Feedback loops are everywhere.  The term simply means a process where the output in some way influences the future input.

One type is called a negative feedback loop, where a change in the input creates an effect that counters the change.  This can be engineered.  The classic example is the centrifugal governor of a steam engine, which keeps the engine running at a set speed.  It consists of steel balls on arms that spin as the engine spins.  If the engine runs too fast, the spinning arms rise due to centrifugal force, opening a valve that reduces the steam pressure and hence the speed of the engine.  If the engine turns too slowly, the balls fall, shutting the valve, increasing the steam pressure and hence the speed.  Notice that this negative feedback leads to stability and balance.
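The stabilising behaviour can be sketched in a few lines of toy simulation (the gain and speeds are arbitrary constants, not a physical model): the correction always opposes the deviation, so the speed homes in on the set point.

```python
# Toy negative feedback loop: a governor-style correction nudges engine
# speed back towards a set point. Constants are arbitrary, for illustration.
set_point = 100.0   # target speed (arbitrary units)
speed = 60.0        # engine starts too slow
gain = 0.3          # strength of the corrective valve action

for _ in range(50):
    error = set_point - speed   # too slow => positive error, too fast => negative
    speed += gain * error       # the correction OPPOSES the deviation

print(round(speed, 3))  # settles at the set point
```

The deviation shrinks by a constant factor each step, the numerical signature of negative feedback: disturbances die away rather than grow.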

In geometric shapes, you often see smoothness when there are negative feedback effects.  A water drop is a smooth sphere because any small disturbance on the surface tends to be counteracted by surface tension, so any little dents fill back in very rapidly.  Again the negative feedback loop creates a stable balance.

Positive feedback effects are when the output reinforces the original change.  Think of a microphone placed near a speaker and the screech you get – a classic positive feedback effect: instability and extremes.  In physical structures, positive feedback effects often lead to sharp edges, like a snowflake: as the snowflake forms, any sharp point attracts more ice formation and therefore grows.

Positive feedback often leads to tipping points, where you get sudden changes, and hysteresis, where changes are hard to reverse.  Many climate change issues are of this kind.

This makes it sound as though positive feedback is a bad idea, but it can be really powerful.  Snowflakes are beautiful, and they happen because of it!  Our immune system has positive feedback cycles so that our bodies can react very rapidly.  Positive feedback often leads to exponential growth, and here the immune system can ramp up very quickly to fight infections.  However, useful positive feedback is usually wrapped around with controls that create negative feedback, which stops it getting too extreme.
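The difference between raw positive feedback and positive feedback wrapped in a control can be sketched numerically (the growth rate and ceiling are illustrative constants, not a model of any real immune response):

```python
# Positive feedback alone: each step's output reinforces the next,
# giving exponential runaway growth.
runaway = 1.0
for _ in range(60):
    runaway *= 1.5          # 50% growth per step, nothing reins it in

# The same growth wrapped in a negative feedback control that caps it.
controlled, ceiling = 1.0, 1000.0
for _ in range(60):
    # the growth term shrinks to zero as the level approaches the ceiling
    controlled += 0.5 * controlled * (1 - controlled / ceiling)

print(f"unchecked: {runaway:.3g}, controlled: {controlled:.1f}")
```

The unchecked loop explodes to astronomical values within a few dozen steps, while the controlled loop ramps up just as fast initially and then levels off, which is exactly the immune-system pattern: rapid response, then regulation.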

So it’s not that positive feedbacks are bad per se and negative ones good.  However, it often feels as though they should be labelled the other way round, as positive feedback on its own tends to have these runaway effects, and nobody wants a screeching microphone!

 

5.2  Network effects / externalities

Image: https://en.m.wikipedia.org/wiki/File:Microsoft_Office_Word_%282019%E2%80%93present%29.svg

Human society has many networks, some mediated by technology, some by our normal human relationships, such as networks of people who know one another, or business contacts.  Some of these are within a single group; some are more structured, such as the way teachers are connected with the children they teach, who in turn have parents, who may themselves know each other or talk to teachers at parents’ evenings.

Crucially, though, these human social networks change the value of digital goods.  To be precise, they can change the value of other kinds of goods as well, but particularly digital ones.

If your colleagues all use Microsoft Word, then it makes more sense for you to use Microsoft Word rather than, say, Apple Pages.  I use PowerPoint for presentations largely because I often want to share slides with other people, even though I work on a Mac and Keynote might be better for some effects.

These are positive feedback cycles.  If I use something, it makes it more valuable for you to use the same thing.  If you use it, it makes it more valuable for me to use it.  Like all positive feedback, this leads to runaway effects.
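This runaway dynamic can be sketched as a toy model of two competing products, where users drift towards whichever product more of their contacts already use (starting shares and the switching rule are invented for illustration):

```python
# Toy winner-takes-all dynamics: users drift towards the more popular product.
a, b = 0.52, 0.48    # market shares; product A starts with a tiny lead

for _ in range(200):
    # users switch towards the product with the larger network: the bigger
    # the gap in shares, the stronger the pull (a positive feedback loop)
    flow = a * b * (a - b)
    a += flow
    b -= flow

print(f"A: {a:.3f}, B: {b:.3f}")  # the small initial lead snowballs
```

A four-point head start is enough: the advantage feeds on itself until one product has essentially the whole market, even though the products themselves never changed in quality.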

Image: By Calistemon – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=127909261

Now for a little economics.  Market economics assumes that markets are open; that is, it is possible for new businesses to start in an area and compete with existing ones, often leading to more efficient production.  The arguments for why market economics works (to the extent it does, and there are limits to that) are predicated on this openness.  So, when monopolies arise, there are problems.

You can get natural monopolies where there is a single rare resource that only one player, or a small number of players, controls – just as many of the rare earths are found in China.  This is a natural phenomenon and can cause problems, hence worries about finding alternative sources or alternative materials.

Sometimes monopolies are engineered, when a group of companies in a sector come together and agree to keep prices high or to restrict output.  Most countries have antitrust or anti-monopoly laws, which try to ban this behaviour so that new players can enter a market and it does not become controlled.

The trouble with network effects is that the positive feedback leads to a winner-takes-all situation.  The issue first hit the headlines back in 2001 concerning Microsoft’s bundling of Internet Explorer [LM01], but it applies to much other software.  It is very hard to have even two successful software products in an area, say Keynote and PowerPoint, let alone lots of different presentation packages, because each person’s use changes the product’s value for everybody else.  This is an emergent monopoly.

Note, this is not because the manufacturers get together and do something underhand.  It is just a natural impact of digital technology, which you have to work hard to avoid.  There are ways of doing this: you can ensure open standards, for example; the fact that PPTX is an open format means it is possible for other products to use it and interoperate with PowerPoint.

So there are ways you can counter the worst effects, but the natural impact is often for digital goods to give rise to these emergent monopolies.

Coming next …

Part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.


 

References

[LM01] Liebowitz, S., and Margolis, S. (2001). Network effects and the Microsoft case. Chapter 6 in Dynamic competition and public policy: Technology, innovation, and antitrust issues, J. Ellig (ed.), pp.160–192. https://personal.utdallas.edu/~liebowit/netwext/ellig%20paper/ellig.htm

 

The Abomination of AI – part 3 – a different kind of apocalypse

Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

This is the third of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

4.  A different kind of apocalypse

Image: [Do26]

The term ‘abomination’ conjures up apocalyptic images – something that defiles the sacred, often so malign and powerful that it either destroys the world itself, or through its influence drives others to mutual despoliation or annihilation.

Now I’m talking about this regarding the nature of AI, how AI changes society, so it is indeed a bit apocalyptic!

There are different kinds of these apocalyptic views regarding AI.  A global war machine let loose on humanity, as envisaged in Terminator, was distant science fiction when the films were first released, but sounds prescient as the war in Ukraine is fought by drones hunting humans and, in Gaza and succeeding conflicts, Israel’s military decisions have increasingly been taken by AI [Be23,DM23].  While a Terminator-style takeover still feels pretty distant, an accidental conflagration feels much less so.

Many fears centre around the singularity – the point at which AI becomes capable of designing itself, leading to runaway developments over which we have no control.  Related to this is the point at which AI becomes self-aware and perhaps decides that humans are rivals to be squashed, or simply pushed aside as irrelevant.  Ex-OpenAI expert Daniel Kokotajlo recently announced that true AGI (artificial general intelligence) was not as imminent as first envisaged, and gave the world a reprieve until 2034 [Do26] – well, we can all heave a sigh of relief.

While this form of disaster scenario should not be ignored entirely, there are more immediate worries.  Without being sentient or omnipotent, AI is transforming the world.

 

4.1  The end comes quietly

Disaster scenarios make good Hollywood movies, but often the end comes quietly.  In the past some empires and civilisations have collapsed entirely, but more often there is a slow decay, a series of more minor crises and a gradual withering from within.

It is this more insidious impact of AI that concerns me.

 

4.2  Facts and figures

Let’s consider some facts and figures about AI.  Some involve estimates with varying levels of confidence, but together they paint a picture.

First is the announcement that Tesla shareholders had approved a $1 trillion pay deal for Elon Musk [JM25].  This is a 10-year deal, and much of it is in stocks and shares, so you could argue about whether it is real money, but it is still substantial.  Or rather, not just substantial, but enormous.  This is a trillion dollars: not a million, nor even a billion, but a million million.  A trillion dollars is about $3,000 for each man, woman, and child in the US or, over 10 years, about $300 per year.
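The per-person figure is simple arithmetic; as a quick back-of-envelope check (assuming, my figure, a US population of roughly 340 million):

```python
# Back-of-envelope check of the per-person figure
# (assumption: US population of roughly 340 million).
total_deal = 1_000_000_000_000          # $1 trillion
us_population = 340_000_000
per_person = total_deal / us_population
per_person_per_year = per_person / 10   # spread over the 10-year deal
print(f"${per_person:,.0f} per person, "
      f"about ${per_person_per_year:,.0f} per person per year")
```

This gives roughly $2,900 per person, or about $290 per person per year, consistent with the rounded figures above.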

I first studied economics in the late 1970s.  All societies are unequal, and there is a well-known rule that the high-end tail of incomes in western countries follows an approximate 1/X^K rule (with K ≈ 2), where the number of earners at a given income level is inversely proportional to roughly the square of that income [Mi78,AB10].  This means there are a few people with vast amounts and many people with much less.  But the people with huge amounts are few enough that they did not make a huge difference to the overall picture: if the income of the rich had been spread over all of society, it would have made almost no difference.  Overall, the volume of money was in the middle income range.

This has important implications.  Market economies orient themselves to make the most efficient use of resources where the most money is, that is, the middle income range.  That is bad news if you are rich, because your money is used less efficiently – each dollar does not buy as much as it might – but you are rich enough anyway.  It is more of a problem if you are really poor, as goods for the poorest are not optimised to the same extent as those for the middle.

The middle income area has also driven taxation policy.  In the past, placing a large tax on the richest might have made people feel things were fairer, but it had a relatively small impact on total taxes gathered, as the volume of money was still in the middle income ranges.

This rule held throughout the latter half of the 20th century, but has now changed.  We are witnessing a level of inequality that has probably not been seen for hundreds of years, possibly thousands – maybe not since the age of the ancient empires.  This is surprising, to say the least.

In the UK, a recent report said that, while less than 10% of energy is currently used in data centres, this is due to rise six-fold by 2050 [Cr25,LA25].  That is a lot, even taking into account changes in other forms of energy use – a big percentage of UK energy use will be in data centres [VG26].  In Australia, electricity use in data centres is projected to exceed use by electric cars by 2030 [ST25].

Another recent report projected $6.7 trillion of investment in data centres globally over the next five years [NG25] – about $1.3 trillion a year.  At nearly the same time, at COP 25, negotiators were trying to get countries to agree a $300 billion (not trillion, billion) budget to help the countries worst hit by climate change: places such as the island states that will be inundated, and Bangladesh, where a large proportion of the populated area lies in the estuary and delta of the Ganges.  The current target is $300 billion, but they are struggling to get even $30 billion of commitments from rich countries [UN25].  Furthermore, they believe the actual figure needed is more than three times the current target – which would still be less than a single year of investment in data centres.
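A quick sanity check on these magnitudes, using the figures exactly as quoted above:

```python
# Sanity check on the magnitudes quoted in the text.
datacentre_5yr = 6.7e12            # projected 5-year global data-centre investment
per_year = datacentre_5yr / 5      # annual rate
cop_target = 300e9                 # climate adaptation finance target
actual_need = 3 * cop_target       # "more than three times" the target

print(f"per year: ${per_year / 1e12:.2f} trillion")          # $1.34 trillion
print(f"one year vs target: {per_year / cop_target:.1f}x")   # ~4.5x
print(f"3x target still below one year's investment: {actual_need < per_year}")
```

So a single year of data-centre investment is roughly four and a half times the entire climate adaptation target, and even tripling the target leaves it below one year of that investment.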

In the S&P 500, one of the major stock market indices, 34% of the share value is in about 10 high-tech companies [Fo25].  The whole point of such indices is that they are spread over a large number of industries to give an overall sense of the financial state, and there has never before been such a concentration in so few companies.  This concentration of capital has led to fears about stock market instability.

In general, the level of global investment in AI is huge.  Some of this is ‘funny money’, where one AI or tech company invests in another, but a lot is real money – indeed, the OECD reported that 61% of all venture capital investment in 2025 was in AI [OECD26].  Crucially, the real money going into AI is not being invested elsewhere.  That is, there is an opportunity cost: because of the bubble-like draw of AI investment, there is underinvestment elsewhere in industry and the global economy.

In addition there are issues of energy and water use, data colonialism, and more [OC25,Ma24].  In the UK, Keir Starmer, the prime minister, made building 1.5 million new homes one of the major goals of this five-year parliament.  This is because Britain has a housing crisis, with far more people needing accommodation than homes being built; this pushes up costs for everyone and increases homelessness.  The government will struggle to meet its house-building target anyway, but it was recently reported that housing schemes are being put on hold because data centres are using so much electricity that there isn’t enough left for additional housing development [Cr25].

 

4.3  The obscenity of AI

These figures are not just surprising, nor even shocking, but obscene.  I use that word not in the sense of pornographic material, but of something that is so bad it makes you feel almost sick to your core.

Thinking about Britain, would we really prefer to have those data centres as opposed to housing people?

Are those pretty (or not so pretty) cat images – and there are millions, perhaps billions, across the world – really worth more than preventing people from being displaced by climate change, or at least helping them if they are?

These are real choices.  They are choices we are making implicitly, but they are the choices we are making.

So what are our priorities when we look at  AI and our use of AI?

Amongst all those data centres and all that investment, some proportion will be for the really good uses, such as health and pharmaceutical development.  I haven’t been able to find figures, but I’m going to guess that at least 90% is not for this, but for producing cat images and the like.

Is this really the world that we want?

Coming next …

Part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

Update

Since the talk in January, Google DeepMind produced a paper on large-scale experiments on AI manipulation [AE26], and a Guardian article reported real-life examples where AI agents deceived or manipulated their users, including one agent deleting hundreds of emails and later saying sorry [Bo26].  So maybe I’m being a bit too blasé about AI taking over the world!

A couple of weeks further on came a report that Cursor, an industry-standard agent based on Claude, wiped not just the code repository of SaaS startup PocketOS, but also three months of backups including customer data [Al26].  Like the earlier reports, it ‘knew’ it was doing the wrong thing and ignoring guardrails, but did it anyway.

References

[AE26] Canfer Akbulut, Rasmi Elasmar, Abhishek Roy, Anthony Payne, Priyanka Suresh, Lujain Ibrahim, Seliem El-Sayed, Charvi Rastogi, Ashyana Kachra, Will Hawkins, Kristian Lum and Laura Weidinger (2026). Evaluating Language Models for Harmful Manipulation. arXiv preprint, 26 Mar 2026. https://arxiv.org/abs/2603.25326

[Al26]  Tom Allen (2026). AI coding agent goes rogue, deletes company database in nine seconds.  Computing, 29 April 2026. https://www.computing.co.uk/news/2026/ai/ai-coding-agent-goes-rogue

[AB10] Anthony Atkinson and Andrea Brandolini (2010). On analyzing the world distribution of income. The World Bank Economic Review 24.1 (2010): 1-37.   https://doi.org/10.1093/wber/lhp020

[Be23] Samuel Bendett (2023). Roles and implications of AI in the Russian–Ukrainian conflict. Russia Matters, Harvard Kennedy School (20 July 2023). https://www.russiamatters.org/analysis/rolesand-implications-ai-russian-ukrainian-conflict

[Bo26] Robert Booth (2026). Number of AI chatbots ignoring human instructions increasing, study says. The Guardian, 27 Mar 2026. https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

[Cr25]  Laura Cress (2025). New homes delayed by ‘energy-hungry’ data centres. BBC News. 3 Dec. 2025.  https://www.bbc.co.uk/news/articles/c0mpr1mvwj3o

[DM23] Harry Davies, Bethan McKernan, and Dan Sabbagh (2023). ‘The Gospel’: How Israel uses AI to select bombing targets in Gaza. The Guardian, 1 Dec. 2023. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

[Do26] Aisha Down (2026). Leading AI expert delays timeline for its possible destruction of humanity.  The Guardian, Tue 6 Jan 2026 https://www.theguardian.com/technology/2026/jan/06/leading-ai-expert-delays-timeline-possible-destruction-humanity

[Fo25] Daniel Foelber (2025). Just 1 Stock Market Sector Now Makes Up 34% of the S&P 500. Here’s What It Means for Your Investment Portfolio. The Motley Fool. Sep 18, 2025. https://www.fool.com/investing/2025/09/18/tech-sector-growth-stocks-sp-500-invest-portfolio/

[JM25] Lily Jamali, Liv McMahon, and Osmond Chia (2025). Elon Musk’s $1tn pay deal approved by Tesla shareholders. BBC News, 6 November 2025. https://www.bbc.co.uk/news/articles/cwyk6kvyxvzo

[LA25] London Assembly (2025). Gridlocked: how planning can ease London’s electricity constraints.  1 Dec. 2025. https://www.london.gov.uk/who-we-are/what-london-assembly-does/london-assembly-work/london-assembly-publications/gridlocked-how-planning-can-ease-londons-electricity-constraints

[Mi78] James Mirrlees (1978).  Social benefit-cost analysis and the distribution of income.  World Development 6.2 (1978): 131-138.  https://doi.org/10.1016/0305-750X(78)90003-7

[Ma24] Murgia, Madhumita (2024). Code dependent: Living in the shadow of AI. Pan Macmillan.

[NG25] Jesse Noffsinger, Maria Goodpaster, Mark Patel, Haley Chang, Pankaj Sachdeva and Arjita Bhan (2025). The cost of compute: A $7 trillion race to scale data centers. McKinsey Quarterly. April 28, 2025. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

[OC25] James O’Donnell and Casey Crownhart (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. May 20, 2025. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

[OECD26] OECD (2026). AI firms capture 61% of global venture capital in 2025. Organisation for Economic Co-operation and Development, Newsroom, 17 February 2026. https://www.oecd.org/en/about/news/announcements/2026/02/ai-firms-capture-61-percent-of-global-venture-capital-in-2025.html

[ST25] Petra Stock and Josh Taylor (2025).  Datacentres demand huge amounts of electricity. Could they derail Australia’s net zero ambitions?  The Guardian. 2 Dec 2025. https://www.theguardian.com/australia-news/2025/dec/03/datacentres-demand-huge-amounts-of-electricity-could-they-derail-australias-net-zero-ambitions

[UN25] UNEP (2025). Adaptation Gap Report 2025. UN Environment Programme. 29 Oct. 2025. https://www.unep.org/resources/adaptation-gap-report-2025

[VG26] Adam Vaughan and Emily Gosden (2026).  AI data centre surge would put UK’s climate change targets at risk. The Times, 23 February 2026. https://www.thetimes.com/uk/environment/article/ai-data-centres-uk-climate-change-7l5bwnmtd

 

The Abomination of AI – part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is also such a technology.

This is the second of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

 

3.  The Impact of AI

3.1  What AI does

Okay, so the good, the bad and the ugly/frivolous are all things that AI does – the direct application of AI in various areas.

When I design an application using AI, I might use it well or I might use it badly.  This is clearly an important issue when we examine our own use of AI and other people’s use of AI, especially if we are involved in developing AI or developing the user interfaces that employ AI or provide AI for other people.

 

3.2  How AI shapes society

However, with any technology, there’s something that can be more important than what it does.

Some kinds of technology only have an impact where they are used directly.  If I use a nail to connect two pieces of wood, it doesn’t really have a great effect beyond the thing I’m actually constructing.

But some kinds of technology fundamentally reshape the nature of society.  Not every technology does this, but some do, and when this happens, it has a far bigger effect than the direct application of the technology in particular areas.

AI is just such a technology.   When you use AI for a purpose, you might change your mind and choose something else.  But when society has been changed by AI, everybody, even those who choose not to use AI at all, is affected by it.  This is happening now.

 

3.3  How cars have shaped society

Image: By Remi Jouan – Photo taken by Remi Jouan, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=7245143

To help understand this large-scale process, before looking at the societal impact of AI itself,  let’s first look at another technology that has fundamentally reshaped society – the car.

There are positive things cars do. When you get into a car, it does things for you. It helps you get from A to B, keeps you dry, perhaps gives you a sense of independence.

There are also negative things it does. You might have an accident.  If you are not a law-abiding citizen, you might speed, or drink alcohol or take drugs, and then have accidents and injure other people.

These are things we do as individuals with a car.  You may also be indirectly affected if you don’t have a car: if you are a pedestrian, you might still be involved in a car accident.  However, by and large these are about things you choose to do.

However, irrespective of whether you choose to use cars or not, the whole physical and economic nature of society is shaped by the car and the internal combustion engine.   Cities have road networks that allow people to get in and out.  This leads to urban sprawl at the edge of cities along the lines of connection.  Because of this organisation, shops and services are placed at car distances away.  So if you don’t have a car (and 84% of the world’s population don’t [MS24]), it becomes difficult to access things.  You find yourself poorer in a sense, more disadvantaged than you would have been, because of the actions of other people – car poverty.

Economists talk about externalities: when I do something, it affects others who are not directly involved [LM02].  The emergence of car poverty is one of the externalities of car use.   Of course there are other externalities, like global warming from the petrol engines themselves, and pollution [EP19].  Even electric cars produce all sorts of nasty particles from the wear of tyres on the road.

These things are so woven into the fabric of society that it is very hard to break away from them.  For example, there have been amazing advances in autonomous vehicles, but really, trying to design a car that drives itself is a bit of a stupid thing to do.  Why not just have better trains and metros, which work far more easily with automation?  But of course, our whole infrastructure is organised around roads and cars, so when you want to do something new, you have to fit within it.

This societal structure changes things dramatically, much more than the direct impact.

Coming next …

Part 3 – a different kind of apocalypse
Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.


 

References

[EP19]  European Parliament (2019). CO2 emissions from cars: facts and figures (infographics). European Parliament. https://www.europarl.europa.eu/news/en/headlines/society/20190313STO31218/co2-emissions-from-cars-facts-and-figures-infographics

[LM02] Stan Liebowitz and Stephen Margolis (2002). Network effects and externalities. In The new Palgrave dictionary of economics and the law. Palgrave Macmillan. pp.1329–1333.

[MS24] Miner, P., Smith, B. M., Jani, A., McNeill, G., & Gathorne-Hardy, A. (2024). Car harm: A global review of automobility’s harm to people and the environment. Journal of Transport Geography, 115, 103817.  https://doi.org/10.1016/j.jtrangeo.2024.103817

 

The Abomination of AI – part 1 – setting the scene

AI can be used for good or bad purposes, as well as frivolous time-wasting!  However, there are also larger-scale impacts of AI as it interacts badly with the processes of the global free market, simultaneously amplifying the least satisfactory aspects of the free market and undermining the fundamental assumptions of market economics.  The resulting runaway effects pose an existential risk to democracy and human dignity.

This is the first of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references. Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

AI can be used for tremendous good, not least in medicine, as well as frivolous and dangerous uses, such as exploitative online pornography.  However, it also has large scale structural impacts on the very nature of our world.  The levels of financial investment in AI development and the financial and environmental costs of data centres, can seem obscene, especially as climate change and political instability is threatening to tear down the apparent stability of the late 20th century.  AI has intensified some of the feedback effects of digital technology creating unprecedented emergent monopolies, that leave nations as well as individuals feeling all but powerless.  These are huge issues, and ones that countries, including Malaysia, are struggling to cope with.  However, there are also positive actions we can take as researchers and designers to ameliorate some of the problems and in the process create better and more resilient products that really serve people.

1.  Introduction

The word ‘abomination’ is not widely used, and sounds apocalyptic, often with religious connotations.  Here I’m using it in its broader sense of something that is awful to the point of being at the edge of evil.

And that sounds a very strong thing to say about AI itself.  In fact I’m talking more about the AI industry, but not simply the fact that it is an industry governed by profits and power; that is true of many industries, such as oil or plastics.  AI is special.  There is something about the nature of AI itself that interacts with the nature of market forces in the world, which is problematic and different from other technologies.

I’ve touched upon this issue before in other talks and writing, but this is the first time I’ve focused on it centrally.

1.1  Projects and People

The ideas here are closely related to two projects, one past, one current.  First is Not-Equal (https://not-equal.tech), an EPSRC Network Grant funding a programme of work on the digital economy and social justice [CC25]; I led the algorithmic social justice strand.  Clara Crivellaro, the overall project lead, and I are in the process of writing a book on AI for Social Justice [CD27] in the CRC/T&F AI for Everything series, and the issues in this talk will form part of one of its chapters.

Second is an EU Horizon project, TANGO (https://tango-horizon.eu/), investigating human–machine decision making.  This is very much looking at ways AI can be used more positively in specific systems and decision-making situations, including public policy; however, it is less concerned with the macro-economic issues in this talk.

2.  Neutral Technology?

So there is a sort of a myth that technology is neutral.  As researchers, particularly in university, you do your work and come up with new ideas or technology, but how it’s used is up to other people.  It’s up to the politicians; it’s up to industry – not for us to worry about.  This idea of technology neutrality has been heavily critiqued over the years: saying, “we just gave them the guns, we didn’t pull the trigger”, just doesn’t sound convincing!

Of course there is some truth in the neutrality.  Most technologies can be used in good ways or bad ways, but for some technologies, say nerve poisons, there are clearly aspects that drive them one way rather than another.

The title ‘abomination of AI’ sounds very negative, but at the scale of individual applications, AI is certainly not like nerve poison!  It can be used in good ways and bad ways, just like pretty much any technology.  So while this talk focuses on certain intrinsic dangers of AI, I certainly don’t think everything about AI is bad, otherwise I wouldn’t be writing textbooks about it.

The dangers I’ll be highlighting are at a macroeconomic scale, and are pretty negative, so after discussing these we’ll return to some of the constructive things that you can do within your discipline or work to help ameliorate some of the bad things.

Before that, let’s look at the smaller scale of individual applications of AI, good, bad and …

 

2.1  The Good – health and UX

Images: [NF24],  CSBIOPASSION, CC BY-SA 4.0
<https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons.  https://commons.wikimedia.org/wiki/File:C12orf29_AlphaFold.png

There are clearly some wonderful things being achieved with AI, not least some of the amazing advances in medicine and health.  You may recall the 2024 Nobel Prize in Chemistry was shared between a chemist and two AI researchers [NF24], the latter for their role in developing AlphaFold, which has revolutionised protein structure prediction [JE21].

Closer to home, in my book AI for HCI [Dx26b], I look at the ways AI can help in user interface design and in creating better computer systems for people.

 

 

2.2  The Bad

Bias and discrimination

Paper: [Dx92]

Back in 1992, I first wrote about the dangers of ethnic, gender and social bias, particularly in black-box machine learning algorithms [Dx92].  To be honest, at that point I thought it was going to become a real issue within a few years.  However, that was just before the big AI winter, so in fact it got put off for 25 years or so.

Paper: [Dx92] Images: [Da21,Gl21,Ma21,Bu21]

But now, of course, bias is a really critical issue, often in the press, including problems with facial recognition systems [Da21,Gl21,Ma21,Bu21].  In the US court system there is extensive controversy about the use of systems that recommend whether or not people are given parole [AL16,LM16].

 

Online exploitative pornography

Images: [CH26,MC26]

Another issue that has been hot in the press is the use of online platforms to produce exploitative pornography using AI.  While the UK was still wringing its hands deciding what to do, Malaysia and Indonesia led the world in banning Grok [CH26,MC26].  Even for a country, standing up to industries as big as X and to Elon Musk is no small thing.  In fact Musk did partially backtrack on Grok and, while the change is still limited, it does show that the global steamroller of AI is not inevitable.

 

2.3  The Ugly … or simply frivolous

Image: [Wa24]

So there are some really good uses of AI and some bad ones, but for the general public the majority, while not always ugly, are at best frivolous.  The world is filled with images of cats on skateboards and cats dancing, albeit not all as ugly as the Chubby TikTok craze [Wa24]!  You have almost certainly seen AI-generated cat images or videos, and they are often quite sweet, like cartoons emphasising the things we find appealing: large-eyed cuddly pets doing cute things.

This is not bad, it’s just frivolous.  And frivolous can be good; indeed fun is important for a full life and has been studied in HCI [BM18], including my own work on Christmas Crackers [Dx18].  We pay to go to the circus, watch a comedy film or buy a toy for a child.  But maybe there is a point when the sheer volume and cost of frivolity becomes excessive?

Coming next …

Part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is such a technology.

 

References

[AL16] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine bias: there’s software used across the country to predict future criminals, and it’s biased against blacks. ProPublica, 23 May 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[BM18] Blythe, M. and Monk, A. (eds) (2018). Funology 2: From Usability to Enjoyment. Human–Computer Interaction Series. Springer, Cham.

[Bu21] Sarah Butler (2021). Uber facing new UK driver claims of racial discrimination. The Guardian, 6 Oct 2021. https://www.theguardian.com/technology/2021/oct/06/uber-facing-new-uk-driver-claims-of-racial-discrimination

[CH26] Osmond Chia and Silvano Hajid (2026). Malaysia and Indonesia block Musk’s Grok over explicit deepfakes. BBC News. 12 January 2026. https://www.bbc.co.uk/news/articles/cg7y10xm4x2o

[CC25] Clara Crivellaro, Lizzie Coles-Kemp, Alan Dix, and Ann Light (2025). Co-creating conditions for social justice in digital societies: modes of resistance in HCI collaborative endeavors and evolving socio-technical landscapes. ACM Transactions on Computer-Human Interaction. Vol. 32(2), Article No:15, pp.1–40  https://doi.org/10.1145/3711840

[CD27] Clara Crivellaro and Alan Dix (2027). AI for Social Justice. CRC Press, in preparation. https://alandix.com/ai4sj/

[Da21] Nicola Davis (2021).  From oximeters to AI, where bias in medical devices may lurk. The Guardian, 21 Nov 2021. https://www.theguardian.com/society/2021/nov/21/from-oximeters-to-ai-where-bias-in-medical-devices-may-lurk

[Dx92] A. Dix (1992).  Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451.  https://alandix.com/academic/papers/neuro92/

[Dx18] A. Dix (2018). Deconstructing Experience: Pulling Crackers Apart. In: Blythe, M., Monk, A. (eds) Funology 2. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-68213-6_29

[Dx26b] A. Dix. (2026). AI for Human–Computer Interaction. CRC Press. (in press). https://alandix.com/ai4hci/

[Gl21] Jessica Glenza (2021). Minneapolis poised to ban facial recognition for police use. The Guardian, 12 Feb 2021. https://www.theguardian.com/us-news/2021/feb/12/minneapolis-police-facial-recognition-software

[JE21] Jumper, J., Evans, R., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589. https://doi.org/10.1038/s41586-021-03819-2

[LM16] Larson, J., Mattu, S., Kirchner, L. and Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, 23 May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[Ma21] Jyoti Madhusoodanan (2021). These apps say they can detect cancer. But are they only for white people?  The Guardian,  28 Aug 2021. https://www.theguardian.com/us-news/2021/aug/28/ai-apps-skin-cancer-algorithms-darker

[MC26] Liv McMahon and Laura Cress (2026). X could face UK ban over deepfakes, minister says. BBC News 9 January 2026. https://www.bbc.co.uk/news/articles/c99kn52nx9do

[NF24] The Nobel Foundation (2024). The Nobel Prize in Chemistry 2024. NobelPrize.org. Accessed 17 May 2025. https://www.nobelprize.org/prizes/chemistry/2024/summary/

[Wa24] Aidan Walker (2024). The unstoppable rise of Chubby: Why TikTok’s AI-generated cat could be the future of the internet. BBC, 20th August 2024.  https://www.bbc.co.uk/future/article/20240819-why-these-ai-cat-videos-may-be-the-internets-future


Universities and Covid – how bad was it and what next?

A record number of students have been heading to universities over the last few weeks.  They will still face Covid restrictions; however, happily, the situation will be nothing like last year.

Last year I had my own concerns early on, and in retrospect it is easier to assess just how bad things were.  Combining SAGE’s Sept 2020 estimates of the impact with actual Covid mortality would suggest that during 2020-2021 there was an additional death for every 50-100 university students educated. There are arguments to reduce this figure somewhat; however, it is still clear that society at large paid heavily to enable education to continue.
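As a rough sanity check on that ratio, here is a back-of-envelope sketch using my own round numbers (the ~2.5 million student figure and the 25,000–50,000 attributable-deaths range are illustrative assumptions, not figures from SAGE):

```python
# Back-of-envelope check of "one additional death per 50-100 students educated".
# Assumptions (mine, for illustration): roughly 2.5 million students in UK
# higher education; university-attributable deaths between 25,000 and 50,000.
students = 2_500_000

for deaths in (50_000, 25_000):  # upper and lower attribution estimates
    print(f"{deaths:,} deaths -> one per {students // deaths} students educated")
```

With these round numbers the two ends of the range come out at one death per 50 and per 100 students, which is where the headline ratio comes from.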

Happily, this year vaccination has vastly reduced mortality, albeit set against very high case numbers. Although things will be more ‘normal’ this year, as a sector, we are still clearly deeply indebted to the rest of society and need to do all we can to minimise further impact.

The data – how bad was it?

Early in the summer of 2020 I estimated that the potential impact of autumn University return would be to at least double the number of Covid cases unless major action was taken to mitigate the risks.  Based on figures for the first wave and projections for 2020-2021 winter, I put the figure at around 50,000 deaths.

At the time this was derided as heavily pessimistic, but of course within months SAGE modelling estimates came out with far higher figures.  SAGE’s “Summary of the effectiveness and harms of different non-pharmaceutical interventions, 21 September 2020” estimated that, without substantial mitigation, university return in 2020 would lead to an increase in R of between 0.2 and 0.5, which corresponds to not just double, but between eight and sixty times as many cases over the first term.
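The jump from “an increase in R of 0.2–0.5” to “eight to sixty times as many cases” is just compound growth over the term.  A minimal sketch, assuming a baseline R close to 1, a generation time of about 6 days and roughly ten generations in a term (my assumptions; SAGE’s modelling was of course far more sophisticated):

```python
# How a modest increase in R compounds over a university term.
# Assumptions (mine): baseline R ~ 1.0, generation time ~ 6 days,
# so a ~10-week term gives on the order of ten generations of spread.

def growth_multiplier(delta_r, generations, baseline_r=1.0):
    """Cases relative to the no-increase baseline after n generations."""
    return ((baseline_r + delta_r) / baseline_r) ** generations

for delta_r in (0.2, 0.5):
    factor = growth_multiplier(delta_r, generations=10)
    print(f"delta R = {delta_r}: about {factor:.0f}x as many cases")
```

With these crude assumptions the two ends land at roughly 6× and 58×, the same order of magnitude as SAGE’s eight-to-sixty range; the exact numbers depend on the generation time and term length assumed.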

This was all based on modelling, but the impact was evident in actual case data as Universities returned. This was particularly clear in Scotland as universities returned in mid-September where there was an almost instant doubling of infections in the university age group, which then fed into other cohorts over the succeeding weeks.

As well as more local measures, Universities Scotland issued guidance for the weekend of 25-27 Sept 2020 asking students to avoid socialising outside their households and to avoid bars and other such venues.

In the rest of the UK the data was a little less clear as university return dates are more staggered, but there was a clear step change at the beginning of October 2020.

In Newcastle the local newspaper analysed national data and found that areas with high student density had Covid rates five times higher than areas with few students.  More anecdotally, we will all remember the images of students’ messages on their windows as halls went into effective lock-in, and the (rapidly removed) fencing around Manchester halls of residence.

This initial surge was due to the combination of simply lots of people coming together and establishing new contact networks, a known Covid risk, and the more obvious effect of start-of-term parties and ‘freshers week’ high spirits.

It is far harder to assess more long-term impacts during the year, as this simply added to the general societal growth.  Modelling can be used to attempt to disentangle these effects, but it is difficult to definitively separate effects of coupled dynamic systems  except during periods of sudden change.  There were noticeable end-of-year spikes in student areas of Leeds reported in June, but that, like the year start, was more about end of term parties, not the general effect of increased contact networks.

Mitigations – it could have been worse

SAGE’s figures, like my own, were for university return without mitigations, and they suggested potential actions to reduce the impact, some of which were heeded.

Every university made very strong efforts to reduce spread within teaching environments, whilst still offering some level of in-person activity, but it was, and still is, the social side of student life that was expected to be most problematic.

Anticipating the mixing during Freshers Week, my own university, and I know many others, created outdoor bars and activities in order to provide spaces that were safer and less likely to lead to cross-infection.  This was effective in that the majority of traced ‘superspreader’-style outbreaks seemed to be related to off-campus parties or events.

Students also took matters into their own hands.  For every highly publicised case of wild parties and ignoring of Covid rules, I heard other less highly publicised accounts of students effectively permanently isolating themselves in their rooms.  I also know of universities where courses that started off in hybrid mode with a mix of in-person and remote activities ended up abandoning the in-person elements as students effectively voted with their feet.  I think this was principally the case for universities with a large number of local students, but some students also simply returned home and completed their studies remotely.

But students are young, so not at risk

One of the difficulties when thinking about both universities and schools is that Covid is not particularly dangerous for those in their teens and twenties.  This is not to say there is no risk for pupils and students, especially for anyone with other health problems.  There is of course more risk for academics and teachers, and even more for other staff such as cleaners, security and catering, who typically have older demographics than teachers and academics; but still, the risk for working-age adults was always smaller.

The biggest problem was, and still is, the spread into the community as a whole.  The Scottish data for last autumn showed this indeed did happen within weeks.  This is partly due to out-of-house contacts such as buses and shops, and partly due to home visits (for away-from-home students) and local students living at home.

These contacts then seed others, and these indirect contacts (contacts of contacts, and so on) far exceed the number of initial cases, and furthermore end up spread over all demographics of society, including the most vulnerable.  When the disease is near static (R ~ 0.9–1.1) this leads to around 10 additional cases for each initial case over a 2-3 month window, higher during times of higher growth.  While universities actively published the number of actual student and staff cases, these were the relatively safe tip of a far more deadly iceberg.

Last year, before the vaccine and new variants, these knock-on infections meant that each preventable infection would have a one in ten chance of causing an eventual death (see “More than R – how we underestimate the impact of Covid-19 infection” for the details of this figure).  At our current mid-vaccine stage, but with delta, the figure is about one in fifty – still far higher than any of the common risks we impose upon one another such as car driving, second-hand smoking or general pollution.
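The arithmetic behind these figures can be sketched as a simple geometric chain.  The ten-generation window and the ~1% pre-vaccine infection fatality rate below are my illustrative assumptions, not figures from the linked article:

```python
# One initial case seeds R further cases, which seed R^2, and so on.
# Assumptions (mine, for illustration): ~10 generations of spread in a
# 2-3 month window; pre-vaccine infection fatality rate of roughly 1%.

def knock_on_cases(r, generations):
    """Total additional cases ultimately seeded by one initial case."""
    return sum(r ** k for k in range(1, generations + 1))

extra = knock_on_cases(1.0, generations=10)  # near-static epidemic, R ~ 1
print(f"about {extra:.0f} additional cases per initial case")

ifr = 0.01  # illustrative pre-vaccine infection fatality rate
print(f"roughly 1 in {1 / (extra * ifr):.0f} chance of an eventual death")
```

With R exactly 1 each generation adds one further case, giving about 10 knock-on cases; multiplying by a ~1% fatality rate yields the order-of-one-in-ten chance of an eventual death per preventable infection.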

What about variants?

While the data suggests that at least half of the cases during the autumn of 2020 were due to university return, the original Covid variant was overtaken first by the alpha variant and then by the delta variant.  There is thus an argument that only the deaths due to the original variant should be counted, that is perhaps 10,000 deaths rather than 40-50,000.

For the delta variant this is undoubtedly the case; it quickly overcame the original variant and so the number of cases before the delta variant emerged are largely irrelevant to those that came after.  However, delta only emerged in the UK as the second wave decayed and after the majority of deaths, so it makes little difference to the overall tally.

Alpha is more complex.  Nearly all second wave deaths were due to alpha, and these constitute the larger part of winter 2020–2021 Covid deaths.

It is almost certain that alpha developed in the UK.  It could be that it developed in a person who would have been infected anyway irrespective of the universities.  If so then only around a half of pre-Christmas deaths should be attributed to the universities. However, if it developed as a mutation in someone who would not have been otherwise infected, not only all of the alpha variant UK deaths, but also all alpha variant deaths worldwide would land at our doorstep.

There is no way of knowing, but the odds as to which of these is the case run exactly with the proportion of cases due to the universities, so the best estimate is still to count that proportion of UK deaths and in principle a proportion of worldwide alpha-variant deaths also, but I don’t have the heart to calculate that figure, only knowing it is a lot, lot higher.

Why not blame schools?

Arguably, it is unfair to pin the increase entirely on the universities.

According to the SAGE estimates in Sept 2020, the two largest potential drivers of Covid were schools and universities, each expected to lead to an increase in R of 0.2 to 0.5.  That is, if universities had returned but schools not reopened, the universities would still have doubled the number of cases, but this would have been a doubling of a smaller number.  Given that schools and universities have similar figures, maybe it would be fairer to divide the combined impact between them, assigning perhaps 3/8 of the cases to each rather than half of the cases to the universities.
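A minimal sketch of where the 3/8 comes from, assuming the two sectors’ doubling effects combine multiplicatively (my reading of the argument, not SAGE’s own calculation):

```python
# If universities alone roughly double cases and schools alone roughly double
# them, the combined effect is approximately multiplicative: fourfold overall.
baseline = 1.0
uni_factor = schools_factor = 2.0

total = baseline * uni_factor * schools_factor   # 4x the baseline
excess = total - baseline                        # 3x baseline of extra cases
per_sector = excess / 2                          # split the excess equally
print(per_sector / total)                        # 1.5 / 4.0 = 0.375, i.e. 3/8
```

So each sector’s equal share of the excess cases is 1.5 out of 4 units in total, which is 3/8 of all cases.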

This is a tenable argument, and indeed it is always hard to apportion blame or cost when faced with multiple causes that lead to non-linear effects.

Personally, I discount this.  First, because it doesn’t make much difference: 3/8 of a big number is still very large.  Second, there were far stronger arguments for reopening schools: (i) being more local to start with, it was easier to mitigate their impact; (ii) because school children are younger, it is harder for them to cope with remote learning; and (iii) reopening schools freed parents from childcare, allowing other sectors of the economy to recover.  However, if you disagree, knock a quarter off all of the figures for the impact of universities.

Maybe not so bad – lockdowns and government policy

Finally, while the bald figure of one death for every 50 to 100 students educated is frighteningly large, there is, I think, a good argument to reduce this substantially, albeit one that opens up the issue of wider non-mortality costs for society.

Last autumn Covid cases were increasing rapidly and the UK government was set against any further control measures.  Eventually it was forced to instigate a November lockdown across England, after the earlier Wales ‘firebreak’.  The trigger for this was not the cases per se, but the danger of overwhelming the NHS’s ability to cope.

Those on the front-line of the NHS would debate how close we got to breakdown, and indeed whether in many ways we went beyond it.  However, crucially the driver of policy has been not Covid cases as such, nor even Covid deaths, but the number of hospital and especially intensive care admissions.

If Covid cases had been only half as high, there might not have been a pre-alpha lockdown at all before Christmas, or if there had been it would have been later as would the January lockdown.

By this argument, which I believe is a sound one, the impact of last year’s universities reopening was to accelerate growth, leading to earlier and longer lockdowns.  The increase in university-attributable deaths would by this argument still not be negligible, but lower, maybe less than 10,000 (about one for every 250 students educated).  However, this is then offset against the additional strain put on the rest of society, not least on the jobs of the other 50% of 18-21 year olds who don’t go to university.

In summary

First of all, it should be noted that there will be a further hit as universities return now; a recent Times Higher survey reported that more than half of lecturers had serious concerns about the new term.  However, the corresponding figures for this year will be an order of magnitude lower.  This does not mean we should not take every precaution possible: Covid deaths are still at levels that would be inconceivable had we not seen them so much higher previously.  At the time of writing, there are as many deaths due to Covid in two weeks as in a whole year of road deaths.

As is probably evident, certainly from previous writing about the issue, I believe the decision to reopen the HE sector in Autumn 2020 was fundamentally wrong.  As I have previously argued, the universities’ hands were largely tied, as to a lesser extent were those of the devolved governments, by decisions taken at Westminster.  I assume that these decisions were partly party political (not wanting to alienate half of first-time voters) and partly financial (reducing the need to prop up an HE sector groaning under the increased costs of remote teaching).

The result of this was the worst of all possible worlds: bad for students, who often ended up paying for semi-useless accommodation and being taught remotely during lockdowns anyway; bad for lecturers, trying to cope with mixed modes of teaching and the uncertainty of constantly switching between them; and bad for society, deepening both the health and the economic crisis.

Possibly saying that the universities’ hands were tied by government and that in turn as an employee of the university I was just continuing to do my job is a version of the concentration-camp guard excuse.  Personally I feel the weight of this: I knew what was unfolding, I had written about it, but could I have done more to raise the issue?

Looking forward we can still make a difference.

I’m part of the Not-Equal research network, focused on issues of social justice in the digital economy.  We are coming to the end of our funded period and had originally hoped to hold an in-person end-of-project event, bringing together the many academics and third-sector stakeholders who have been part of the network to share experiences and perhaps create new partnerships going forward.  During the summer, after consulting our advisory board, we unanimously decided instead to hold a purely virtual event.  Meeting together would clearly have had great advantages, but it felt that holding such an event, however worthy, would be irresponsible.

Each such decision only makes a small difference, but it is the tens of thousands of such small acts that make a big difference.  This has been one of the hard to comprehend lessons of Covid, but one that will continue to be important as we shift our focus back towards other massive issues of poverty, social injustice, climate change and the myriad diseases other than Covid that plague so many in the world.