The Abomination of AI – part 8 – summary and recap

This final post recaps what we’ve learnt about the runaway nature of the AI industry, how it undermines free markets, and how we can make a difference. The core question is not what can AI do, but what should AI do?

This is the last of the series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself that interacts with market forces in a way that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, create positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets. Thus the very nature of digital technology and AI breaks free markets, leading to runaway inequality even with the best intentions of industry … but some tech companies further exploit these effects.

§6.   Runaway growth of AI is not painless – opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.

§7.   It all seems too big, requiring national and international responses.  But we can make a difference using appropriately chosen small AI (including none). Plus, this good use of AI is good for business too.

8.  In summary

So, in summary, AI can do amazingly good things, but often also bad ones.

More crucial is how AI is shaping society.  We have to think explicitly about this, because AI has its own dynamic.  That dynamic is not good by nature, so we have to control it to make it serve society.  Some of this needs action at governmental and intergovernmental level, because the AI industry is now so big compared with most countries.

However, there are things you can do.  You can sometimes choose not to use AI, or to use small AI, but always try to use AI appropriately rather than just throwing it at a problem and washing your hands of the wider impact.

The core question is not simply what can AI do?  While it still does some things frighteningly badly, AI is getting better and better at doing more and more.

But the big question, the Big, BIG question is what should AI do?

And that’s the question we need to ask ourselves continually both in our individual work and at societal level.

AI will be an abomination, but only if we let it be.

Updates

It is now four months since I gave the talk ICoSCI 2026 on which this blog series has been based.  In that short time there have been many changes, some strengthening the arguments and some challenging them.   In addition, I’ve had helpful feedback from several people, especially extensive comments by Mark Bernstein; so many thanks to Mark and others who have engaged with this series.

I’ve written updates at the end of several blogs.  In part 3 “A different kind of apocalypse” recent reports of agentic AI ignoring guardrails make Terminator-style AI devastation seem less distant.  Since those updates, the publicity around Claude Mythos’ ability to find bugs in established codebases meant that Anthropic deemed it too dangerous to release without first allowing selected partners to use it to check their own security [An26a].  While this may be in part a PR exercise, it is being taken seriously by government and pan-government organisations [AISI26,Go26].  Furthermore, as well as these external threats, Anthropic are also monitoring for the potential of AI ‘sabotage’, that is:

“when an AI model with access to powerful affordances within an organization uses its affordances to autonomously exploit, manipulate, or tamper with that organization’s systems or decision-making in a way that raises the risk of future catastrophic outcomes”  [An26b]

Updates at the end of part 6 “should we worry?“ reinforce the difficulty of switching AI models and the way OpenClaw has emphasised the under-pricing of AI use plans, and hence the way that these might adjust (upwards!) over time, just as we’ve seen with other forms of digital technology.  At the end of part 7 “what can we do?” there is a lovely example of really smart AI, combining AI and plain old computing to achieve better and cheaper outcomes.

In addition to these updates, recent developments (since the updates!) and comments have raised a couple of issues that I’d like to address.

Size matters

Since giving the talk in January, there have been further developments in reducing the costs of AI, not least DeepSeek’s V4 release, which has focused on making model training and execution more efficient [DS26].  Nvidia have released open-source models designed to run on local, small-scale installations (no more than a few thousand-dollar Nvidia chips) [Ca26,Br26]; some are designed for specialised applications, others are more general purpose.  It could be that Nvidia are positioning themselves to spread their market beyond the small number of AI software mega-corporations, and in the process may weaken the emergent monopolies of those software players.  However, Nvidia’s own near-monopoly position in AI hardware is being challenged by DeepSeek’s use of Huawei chips [CC26].

This seems to suggest several potential scenarios:

  1. The big players (OpenAI, Anthropic) see off the cheaper, but less powerful, alternatives, maybe using market dominance to retain near-monopoly positions as discussed in section 5 (blog part 4 and part 5).
  2. The lean, mean models become powerful enough to open up the market fully, so that the current mega companies and their investors lose their ‘bet’ on market dominance, leading to massive drops in their market values, and potentially a major stock exchange crash.
  3. The cheaper models become viable alternatives, but do not immediately compete on sheer power and corporate commitment, leaving the mega AI corporations strong but less all-encompassing, and making the individual solution strategies in part 7 easier.

The impact of (2) on the global economy would be pretty disastrous, especially following the massive hits of US-Israel/Iran and Ukraine/Russia wars, so, on balance, the softer movement of scenario (3) feels like the best outcome.

Perversely, the big AI companies would be likely to weather the storm of (2), as the investment already committed provides a cash buffer; in the same way, tech companies of the dot-com period that had secured second or third round investment before the crash often survived, including Lastminute.com, the IPO of which triggered the market re-evaluation of tech in 2000.  The founders and early funders would see paper devaluations but otherwise still be in control of huge businesses; the smaller, more recent investors would lose out, however, including many global pension funds.

Cats and Consummate Consumerism

Section 3 (blog part 2) is, in part, quite dismissive of the vast volume of ‘frivolous’ use of generative AI.  Later, I hope it is clarified (e.g. by the example of the Doctor’s Kitchen app) that this does not mean criticising all personal use of AI: appropriate use of AI can be very beneficial, in particular allowing far more individualised access to digital technology.  Indeed, LLMs are already democratising access to many forms of professional advice that were previously beyond the reach of individuals and small businesses [Fu26].

However, that does leave the cats.   If that is what people want to create and view, surely that is their business?

In some ways these uses of AI are the ultimate form of consumerism — like the boxfuls of unused plastic toys, the kitchen appliances that lie in the dark recesses of cupboards, the 1.6 billion items of clothing in UK wardrobes that have never been worn [BBC22] — but now all digital, thrust before us by the relentless algorithms of social media.  Items we never knew we wanted instantly become essential, produced apparently for free and provided in precisely the quantity and kind that makes us want more.

Is this a choice, when the algorithms know how to nudge and channel us [HS26], where LLMs have learnt the lessons of the confidence trickster, and where the content itself is addictive [KK25]?  Is this a free-market equivalent of the Opium Wars?

For individuals many of the costs are effectively hidden, especially at the point of use.  Just as no fleece wearer or takeaway coffee drinker deliberately chooses to put microplastics in breast milk, the environmental and social impacts of digital and AI products are often physically and temporally distant and in many cases suffered by others [Ma24].

This distancing is in part due to digital communication and in part the diffuse relationship between the loci of production and use, especially when a large proportion of cost is in training.  However, the distancing is in part deliberate, not least the under-pricing of services to build reliance, a trick that has been part of digital products almost since their onset and very much in the playbook of the neighbourhood drug dealer.

One reason for listing the almost unbelievable facts and figures of AI growth in part 3 (§4.2) is to force us to face these choices explicitly.

The speed of change …

As is evident, things are moving rapidly.  That said, although the details are changing, many of the large-scale impacts of AI on society and economics outlined in this series build on longer-term trends in digital technology that have been evident since at least the turn of the millennium.

With many technologies in the past, the societal impacts only became apparent in hindsight.  With AI there are surprises, especially its spurts of almost unimaginably rapid progress, but we are also increasingly aware of the dangers and pitfalls.  The issues described here are part of this conversation, aiming to ensure that we enter this exciting and dangerous time with eyes wide open.

Coming soon …

If you are interested in these issues, look out for the book AI or Social Justice, which Clara Crivellaro and I are currently working on.  The book website already includes a growing collection of resources including case studies and videos.

References

[AISI26] AI Security Institute (2026) Our evaluation of Claude Mythos Preview’s cyber capabilities. AI Security Institute, Department of Science, Innovation and Technology. Apr 13, 2026. https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities

[An26a]  Anthropic (2026).  Project Glasswing: Securing critical software for the AI era. Accessed 4th May 2026. https://www.anthropic.com/glasswing

[An26b]  Anthropic (2026).  Sabotage Risk Report: Claude Opus 4.6. Accessed 4th May 2026.   https://anthropic.com/claude-opus-4-6-risk-report

[BBC22]  BBC News (2022).  UK wardrobes stuffed with unworn clothes, study shows.  BBC News, 7 October 2022. https://www.bbc.co.uk/news/science-environment-63170952

[Br26]  Kari Briski (2026).  NVIDIA Launches Nemotron 3 Nano Omni Model, Unifying Vision, Audio and Language for up to 9x More Efficient AI Agents.  Nvidia blog.  April 28, 2026.  https://blogs.nvidia.com/blog/nemotron-3-nano-omni-multimodal-ai-agents/

[CC26]  Caiwei Chen (2026).  Three reasons why DeepSeek’s new model matters: The long-awaited V4 is more efficient and a win for Chinese chipmakers.  MIT Technology Review, April 24, 2026  https://www.technologyreview.com/2026/04/24/1136422/why-deepseeks-v4-matters/

[Ca26] Bryan Catanzaro (2026). NVIDIA Launches Open Models and Data to Accelerate AI Innovation Across Language, Biology and Robotics.  NVIDIA Blog, October 28, 2025.  https://blogs.nvidia.com/blog/open-models-data-ai/

[DS26]  DeepSeek-AI (2026).  DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence.  Accessed 29th April 2026.  https://huggingface.co/deepseek-ai/DeepSeek-V4-Pro/blob/main/DeepSeek_V4.pdf

[Fu26]  Maria Ines Fuenmayor (2026). The Privilege of Refusing AI. Codified, Substack, Apr 22, 2026.  https://codifiedai.substack.com/p/the-privilege-of-refusing-ai

[Go26]  Gordon M. Goldstein (2026). Six Reasons Claude Mythos Is an Inflection Point for AI—and Global Security.  Council on Foreign Relations, April 15, 2026. https://www.cfr.org/articles/six-reasons-claude-mythos-is-an-inflection-point-for-ai-and-global-security

[HS26]  Kali Hays, Nardine Saad and Regan Morris, (2026).  Campaigners welcome Meta and YouTube’s defeat in landmark social media addiction trial.  BBC News, 25 March 2026.  https://www.bbc.co.uk/news/articles/c747x7gz249o

[KK25]  Kooli, Chokri, Youssef Kooli, and Eya Kooli (2025). Generative artificial intelligence addiction syndrome: A new behavioral disorder?.  Asian Journal of Psychiatry 107:104476.  https://doi.org/10.1016/j.ajp.2025.104476

[Ma24]  Murgia, Madhumita (2024). Code Dependent: How AI Is Changing Our Lives. Picador.

 

Facial recognition — what does accuracy mean?

A Guardian article at the weekend reported on the increasing number of people being ejected from stores after being misidentified by facial recognition systems as past shoplifters [Mu26].  This commercial use of facial recognition has even less oversight than police use, which has also been causing alarm.  The people at the centre of the report were eventually offered gift vouchers by the shops concerned, but only after considerable personal embarrassment and lengthy, complex processes to clear their names (or, to be precise, faces).

According to the article, Facewatch, the company providing the facial recognition service, claim a 99.98% accuracy rate.  This sounds high.  Does it mean that the cases reported are rare, albeit unfortunate, incidents?

Let’s unpack this a little.

According to the UK Office for National Statistics annual report on Crime in England and Wales, there are just over half a million recorded cases of shoplifting a year [ONS26]; the Facewatch website offers a higher figure of 2 million across the whole UK, perhaps attempting to take account of under-reporting [FW26].  Let’s use this larger figure.

In the UK there are about 55 million adults; assuming on average one shop visit per day, that is about 20 billion shopping visits per year.  So shoplifting accounts for just one visit in 10,000.1

So, if a facial recognition system said no-one was a past shoplifter, it would attain 99.99% accuracy!2  If, on the other hand, the accuracy is equal for shoplifters and non-shoplifters (that is, the false positive and false negative rates are both 0.02%), then there would be roughly two misidentified innocents for every correctly identified shoplifter — hardly rare.  If we use the ONS shoplifting figures, this rises to around eight misidentifications for each correct one.
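The arithmetic can be checked in a few lines of Python.  All figures are the back-of-envelope estimates above, and the equal split of error between false positives and false negatives is an assumption (Facewatch do not say), so the ratio is indicative only:

```python
# Back-of-envelope base-rate check, using the estimates quoted above.
visits = 55e6 * 365          # ~20 billion UK shopping visits per year
shoplifting = 2e6            # Facewatch's UK-wide estimate
p = shoplifting / visits     # base rate: roughly 1 visit in 10,000

# A trivial classifier that flags no-one at all:
trivial_accuracy = 1 - p     # ~99.99% "accurate"

# Assume the claimed 99.98% accuracy, with equal false-positive and
# false-negative rates of 0.02% (an assumption):
e = 1 - 0.9998
false_positives = (1 - p) * e    # innocents wrongly flagged, per visit
true_positives = p * (1 - e)     # shoplifters correctly flagged, per visit

print(f"trivial accuracy: {trivial_accuracy:.4%}")
print(f"wrongly flagged per correct identification: "
      f"{false_positives / true_positives:.1f}")
```

With the ONS figure of about half a million incidents, the base rate drops to roughly 1 in 40,000 and the same calculation gives several false alarms per genuine identification.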

One assumes that Facewatch adjusted the recognition thresholds to have a lower false positive rate (wrongly accused) than this, instead accepting a greater proportion of missed true shoplifters; but in that case an overall 99.98% figure is unachievable.  Most likely the reported figure is based on training data with perhaps equal numbers of photos of shoplifters and non-shoplifters (essential to allow effective learning), so the 99.98% accuracy refers to this data, not to the proportions encountered in realistic (let alone real) use.

In both this case and others, such as rare-disease diagnosis, seemingly high stated accuracy rates may not be as good as they at first seem, and certainly need a lot of context to be meaningful.  As is clear, this is by no means an abstract mathematical discussion, but one that affects real lives.  In the case of facial recognition, the article also reminds us that these kinds of systems often have lower accuracy, and in particular higher false positive rates (wrongful accusations), for Black and Asian people and for women.

 

References

[FW26]   Facewatch (2026).  Home page. Accessed 4th May 2026.  https://www.facewatch.co.uk

[Mu26]  Jessica Murray.  Guilty until proven innocent: shoppers falsely identified by facial recognition system struggle to clear their names.  The Guardian, 3 May 2026.  https://www.theguardian.com/technology/2026/may/03/guilty-until-proven-innocent-shoppers-falsely-identified-by-facial-recognition-struggle-to-clear-their-name

[ONS26]  Office for National Statistics (2026).  Crime in England and Wales: year ending December 2025.  ONS Centre for Crime and Justice, 23 April 2026.  https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice/bulletins/crimeinenglandandwales/yearendingdecember2025

 

  1. It is really hard to keep track of these huge numbers.  I’m expert at it, but I initially made a small slip and was out by a factor of 20.[back]
  2. When I read accuracy figures in academic papers on machine learning, I often do the equivalent calculation for a trivial classifier … as in this case, it is often no worse than the algorithm.[back]

The Abomination of AI – part 7 – what can we do?

It all seems too big, requiring national and international responses.  But we can make a difference using appropriately chosen small AI (including none). Plus, this good use of AI is good for business too.

This is the seventh of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself that interacts with market forces in a way that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, create positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets. Thus the very nature of digital technology and AI breaks free markets, leading to runaway inequality even with the best intentions of industry … but some tech companies further exploit these effects.

§6.   Runaway growth of AI is not painless – opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.

 

7.  What can we do?

These issues all seem too big, frighteningly so.   So what can you do?

You might be a policy maker, or on a committee advising government.  If so, you might be in a position to make changes at that scale.  Most of us do not have such high-level influence, but there are changes you can make within your own sphere to help ameliorate some of these potential dangers.  I’ll focus on the UX designer or AI developer, but some of the ideas are ones you might adopt in your own personal use or within an organisation.

 

7.1  No AI

One option is to simply say “no” to AI.

If you are a designer, ask, “do I need AI at all in my project?”  Of course, everybody now expects every product to say ‘AI powered’, so you may not be able to avoid AI altogether, but it could be very simple AI.  Do ask whether you need it at all, and if you don’t, ask why you feel you need to use it.

 

7.2  Small AI

If you do decide to use AI, you can opt for small AI.

If you are using language models or other generative AI, you might use smaller models, the kind that have been deliberately designed to be able to run on less powerful hardware. There are many good reasons to do this.  Indeed, Apple have been encouraging smaller AI because they want the AI to run on people’s personal devices, not just in the cloud.  This is because privacy is a strong part of the company brand.

Where it is appropriate you could use traditional AI, which is usually much smaller in terms of memory and computation.

Purely from a technical perspective, there are some really interesting research challenges in this area, both in terms of human computer interaction (see my 2024 talk on ‘Patient Interaction’ [Dx24]) and also pure technical AI.

Images: [Di22,Sa23,Dx25,DS25]

You’ll have seen some of the modifications of algorithms that are transforming this landscape, including open models such as OPT and Llama [ZD22,TH23], LoRA [HS22] and LiGO [WP23].  DeepSeek [DS24,DS25] made waves when US export restrictions on NVIDIA chips forced Chinese innovators to adopt a far leaner and smarter approach to LLM development [LF24].  Debatably, DeepSeek’s learning might have piggybacked off some of the other LLMs [We25b], but certainly at execution time it used far fewer resources than other LLMs at the time.  Now other LLMs have adopted lessons from DeepSeek, and all are looking to perform more efficiently, so there is a small shift in thinking away from a simplistic ‘bigger is better’ approach [Hi20].
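As a flavour of why techniques such as LoRA are so much cheaper, here is a minimal NumPy sketch of the idea: the big pretrained weight matrix is frozen and only a low-rank update is trained.  The dimensions are illustrative, not those of any particular model:

```python
import numpy as np

d, r = 4096, 8                      # hidden size and LoRA rank (illustrative)
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))         # frozen pretrained weight, never updated
A = rng.normal(size=(r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection, zero at start

x = rng.normal(size=d)
y = W @ x + B @ (A @ x)             # LoRA forward pass: frozen path + update

trainable = A.size + B.size         # 2 * d * r = 65,536 parameters
print(f"trainable fraction: {trainable / W.size:.2%}")
```

Because B starts at zero, the model initially behaves exactly like the pretrained one; fine-tuning then only has to learn the 2·d·r numbers in A and B rather than the d² numbers in W.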

 

7.3  When to use AI

There is also a choice about when to use AI.

The most obvious use of AI is at execution time, in a user interface or delivered application, as part of the service provided.  This can of course be small AI, or even no AI at all.

But you can also use AI at design time.  You might use big AI to create small AI for the delivered system, for example using techniques to compress the model.  You can also use AI as part of the UX process to critique a user interface, create rapid prototypes, or propose design ideas [Dx26b].  In addition, AI-based coding tools can create AI-free (or low-AI) systems.
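As an illustration of using big models to produce small ones, a simple compression technique is a truncated SVD of a layer’s weight matrix: keep only the top singular components and ship the much smaller factors.  This is a generic sketch with made-up sizes, not a description of any particular product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained layer: secretly low-rank structure plus noise.
W = rng.normal(size=(512, 16)) @ rng.normal(size=(16, 512)) \
    + 0.01 * rng.normal(size=(512, 512))

k = 16                                        # rank kept after compression
u, s, vt = np.linalg.svd(W, full_matrices=False)
W_small = (u[:, :k] * s[:k]) @ vt[:k, :]      # rank-k approximation of W

params_full = W.size                                 # 262,144 numbers to store
params_small = u[:, :k].size + k + vt[:k, :].size    # 16,400 numbers
rel_err = np.linalg.norm(W - W_small) / np.linalg.norm(W)

print(f"{params_small} vs {params_full} parameters, error {rel_err:.4f}")
```

Here `u[:, :k] * s[:k]` uses broadcasting to scale each kept column by its singular value, equivalent to `u[:, :k] @ np.diag(s[:k])`; the compressed layer stores the two thin factors instead of the full matrix.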

Crucially, if you use big AI to help create a (smaller) product, the cost of that big AI is amortised: the product gets reused again and again and again.  So it is less expensive, both in money terms for the company and in its impact on the environment and society.

In fact, this is a really powerful use of AI.  For instance, I argue elsewhere that AI critiques of UIs will be far better for accessibility than even the best designers.  This is partly because it is really hard for us to think about even obvious diversity, such as what it is like to be blind or deaf, or to have a physical disability; an automated design tool can check a concept or prototype against vast numbers of different perceptual and physical abilities, and combinations of them.  Even more important, it is almost impossible for us to imagine what it is like to be somebody who thinks differently, for example someone distant from ourselves in a neurodivergent space.  I don’t think AI will be good at this, but I think it will be better than we are.

 

7.4  How to use AI

Finally, if you are using AI, think carefully about the kind of AI you are going to use and how to incorporate it into a system.  For many years I’ve talked about appropriate intelligence, most often in relation to AI error and the need to design human–AI systems that together are robust and effective, not focusing on AI accuracy alone [DB00,BD23].  However, the same lesson can be applied more broadly.

Often we think about human interaction with AI, but it can be useful to think of a three-way interaction between human(s), AI and plain old computing – that is, hand-coded algorithms or classic AI.  Now look at each kind of AI you are thinking of using and ask: what is it good for?

What kind of things do I mean?  One of the problems with traditional AI was that it was good with hard-nosed rules, but much more problematic with fuzzy things.  There are various techniques, such as Bayesian methods and fuzzy logic, but they require you to formalise the fuzziness into probabilities or similar functions.  Amongst other things, this limited various forms of natural language understanding and common-sense reasoning.

Of course, large language models are really good at dealing with the nuances of language, but LLMs are less good when they try to be very precise, not least because they keep hallucinating!

So as you design for AI, ask what is it good for, how can I use it most appropriately?

As an example of the appropriate use of AI,  my wife uses an app from “The Doctor’s Kitchen” (https://www.thedoctorskitchen.com) to help keep track of the health value of food.

You take a photo of a plate of food before you eat it and the app creates a report on its nutritional value: how much fibre and protein it contains and its inflammation index.  Is it likely to be good for you or bad?

You could imagine doing this by writing a complicated prompt to an LLM, or by training a deep learning algorithm on lots of plates of food and hand-curated reports.  The app does not work like that.

What it does is to use image processing AI to analyse the plate and work out what food is on it.  Indeed, you can press an edit button to see what it thinks you’ve got on your plate, and, if it’s got it wrong, edit it.  One assumes that a log of these edits helps to further train the image processing AI.

So the AI has been used for the fuzzy part of the task: working out that there are crisps on the plate but no cake.  It even manages to recognise hummus and estimate how much.  It is amazingly good, but does sometimes get things wrong in terms of the volume or even what is there; however, when that happens you can easily see and correct it.

So this is using AI for the fuzzy bit.

This list of recognised foods then goes into a standard algorithm that uses tables of nutritional values to look up how much protein there is in, say, 10 grams of almonds, adds this up for the plate, and hence generates the final nutritional report.

AI and traditional computing together — combining the two using the best aspects of each.
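A minimal sketch of the plain-computing half, with hypothetical foods and illustrative (not authoritative) nutrient figures: the fuzzy AI step reduces the photo to a list of (food, grams) pairs, and everything after that is an ordinary table lookup:

```python
# Illustrative nutrient table: grams of each nutrient per 100 g of food.
# (Hypothetical figures for the sketch -- a real app would use curated data.)
NUTRITION = {
    "almonds": {"protein": 21.0, "fibre": 12.5},
    "hummus":  {"protein": 7.9,  "fibre": 6.0},
    "crisps":  {"protein": 6.6,  "fibre": 4.8},
}

def nutrition_report(plate):
    """Sum nutrients over the (food, grams) pairs the recogniser produced."""
    totals = {"protein": 0.0, "fibre": 0.0}
    for food, grams in plate:
        for nutrient, per_100g in NUTRITION[food].items():
            totals[nutrient] += per_100g * grams / 100.0
    return totals

# The plate as recognised by the image AI, after any user edits:
plate = [("almonds", 10), ("hummus", 50)]
print(nutrition_report(plate))
```

Nothing here needs to be learnt or prompted, which is exactly why this part is cheap, predictable and easy to audit.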

Note that this is more explainable: you know what is going on.

It is also more flexible: you can choose to enhance some components and change others.

There is also less vendor tie-in.  This is not removed entirely, as a replacement recognition AI would need retraining; however, it is easier to swap just the food-recognition part than it would be if the whole system were a single AI.

This is good from a business point of view, but it also means you are using less large-scale AI, with its environmental, financial and democratic harms, when you could be using simpler computation.

Coming next …

Part 8 – summary and recap

This final post will recap what we’ve learnt about the runaway nature of the AI industry, how it undermines free markets, and how we can make a difference. The core question is not what can AI do, but what should AI do?

 

Update

Since the talk in January I read about A.T.L.A.S. (Adaptive Test-time Learning and Autonomous Specialization), an AI coding system built by business student Johnathon Tigges, who wanted to challenge the assumption that “only the biggest players can build meaningful things” [Ti26].  It is able to outcompete the big coding agents by being clever: rather than just throwing a problem at a big code-optimised LLM and asking for a solution, it uses AI to generate lots of potential code fragments and tests them, using the best to further refine the AI model … all on a consumer GPU.  A lovely example of smart use of AI!  For a more detailed description see Sebastian Buzdugan’s Medium story about it [Bu26].
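The generate-and-test pattern behind such systems can be sketched without any model at all.  Here the ‘generated’ candidates are hand-written stand-ins (in a real system they would come from an LLM); each is run against a small test suite and the best survivor kept:

```python
# Hypothetical generate-and-test loop: the candidates below stand in
# for LLM-generated code fragments.
CANDIDATES = [
    "def mid(a, b): return (a + b) / 2",        # float midpoint, fails a test
    "def mid(a, b): return a + (b - a) // 2",   # correct integer midpoint
    "def broken(",                              # malformed, scores zero
]

TESTS = [((0, 10), 5), ((2, 3), 2), ((7, 7), 7)]

def score(src):
    """Run one candidate against the test suite; any failure scores zero."""
    ns = {}
    try:
        exec(src, ns)                # compile and define the candidate
        f = ns["mid"]
        return sum(f(*args) == want for args, want in TESTS)
    except Exception:
        return 0

best = max(CANDIDATES, key=score)
print(best, score(best))
```

The AI part only has to be creative; the cheap, deterministic test harness does the quality control, which is what lets a small setup compete with much larger models.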

References

[BD23] Alba Bisante, Alan Dix, Emanuele Panizzi, and Stefano Zeppieri (2023). To err is AI. In Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, pp. 1–11. https://doi.org/10.1145/3605390.3605414

[Bu26] Sebastian Buzdugan (2026). Why a $500 GPU Can Beat Claude Sonnet on Coding Benchmarks. Medium. Mar 28, 2026. https://medium.com/@sebuzdugan/why-a-500-gpu-can-beat-claude-sonnet-on-coding-benchmarks-6c8169ffe4fe

[DS24]  DeepSeek-AI (2024).  DeepSeek-V3 Technical Report. arXiv preprint. 27 Dec 2024. https://arxiv.org/abs/2412.19437

[DS25]  DeepSeek-AI (2025).  DeepSeek-V3. GitHub Repository. Release v1.0.0. 27 Jun 2025. https://github.com/deepseek-ai/DeepSeek-V3

[Di22] Dickson, B. (2022). Can large language models be democratized? TechTalks, May 16, 2022. https://bdtechtalks.com/2022/05/16/opt-175b-large-language-models/

[DB00] A. Dix, R. Beale and A. Wood (2000).  Architectures to make Simple Visualisations using Simple Systems.  Proceedings of Advanced Visual Interfaces – AVI2000, ACM Press, pp. 51-60.  https://www.alandix.com/academic/papers/avi2000/

[Dx24] Alan Dix (2024). Patient Interaction – for well-being, productivity and sustainability. FUSION 2024, Kuala Lumpur, Malaysia, 28 Sept. 2024. https://www.alandix.com/academic/talks/FUSION2024/

[Dx25]  Dix, A. (2025). Artificial Intelligence – Humans at the Heart of Algorithms, 2nd Edition, Chapman and Hall.  https://alandix.com/aibook/

[Dx26b] A. Dix. (2026). AI for Human–Computer Interaction. CRC Press, in press. https://alandix.com/ai4hci/

[Hi20] Hinton, G. (2020). Extrapolating the spectacular performance of GPT3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters. Twitter (now X), Jun 10, 2020. https://x.com/geoffreyhinton/status/1270814602931187715

[HS22]  Hu, E. J., Shen, Y., et al. (2022). LoRA: Low-rank adaptation of large language models. ICLR 2022. https://arxiv.org/abs/2106.09685

[LF24] Liu, A., Feng, B., et al. (2024). Deepseek-v3 technical report. arXiv preprint arXiv:2412.19437. https://arxiv.org/abs/2412.19437

[Sa23] Sajid, H. (2023).  Artificial Intelligence: Can You Build Large Language Models Like ChatGPT At Half Cost? Unite.ai, May 11, 2023.  https://www.unite.ai/can-you-build-large-language-models-like-chatgpt-at-half-cost/

[Ti26]  Johnathon Tigges (2026).  A.T.L.A.S. – Adaptive Test-time Learning and Autonomous Specialization. GitHub. https://github.com/itigges22/ATLAS

[TH23] Touvron, H., Martin, L., et al. (2023). Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. https://arxiv.org/abs/2307.09288

[WP23] Wang, P., Panda, R., et al. (2023). Learning to grow pretrained models for efficient transformer training. arXiv preprint.  https://arxiv.org/abs/2303.00980

[We25b] Werner, J. (2025). Did DeepSeek Copy Off Of OpenAI? And What Is Distillation? Forbes, Jan 30, 2025. https://www.forbes.com/sites/johnwerner/2025/01/30/did-deepseek-copy-off-of-openai-and-what-is-distillation/

[ZD22]  Zhang, S., Diab, M. and Zettlemoyer, L. (2022). Democratizing access to large-scale language models with OPT-175B. Meta Research Blog, May 3, 2022. https://ai.meta.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/

 

 

 

The Abomination of AI – part 6 – should we worry?

Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.

This is the sixth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets. Thus the very nature of digital technology and AI breaks free markets, leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

6.  Should we worry?

6.1  Jobs and power

Image: Scottish Government, CC BY 2.0. https://commons.wikimedia.org/wiki/File:One_of_the_typing_pools_%283829002585%29.jpg

Does this matter?  So what if a small number of companies have notional multi-trillion-dollar balance sheets and are engaged in runaway development in the digital realm, so long as it doesn’t affect the real world?  But, of course, it does: the digital domain is leaking into the physical one.

Of course, technology and automation have long had a massive impact on society, with a gradual shift from human expertise to financial capital.  This dates back at least to the industrial revolution of the late 18th and 19th centuries.  At that time humans were still needed, but they went from being expert weavers and spinners to those (including young children) who merely tended the machines: monitoring, knotting broken threads, and occasionally losing arms in the moving parts.  It wasn’t that humans were unnecessary, but they merely fed the machines.

Moving into the 20th century, machines replaced humans more completely, with fully automated production lines and industrial robots, although of course still with humans cleaning up between them.  In many parts of the global north, skilled manual work has all but disappeared through a combination of automation and outsourcing.

To some extent the impact of automation initially hit traditional male jobs, but in the latter half of the 20th century, from about the early seventies on, it also hit clerical roles.  Until then every big organisation would have had a typing pool.  My own mother was for many years a typist, first in the War Department throughout the Second World War, and then in the Inland Revenue.  These typing pools consisted of ranks of people, usually women, typing sometimes from dictation and shorthand, and sometimes from others’ handwriting.  Word processors basically destroyed the typing pool.  Whereas previously managers would dictate letters and reports to a secretary who would then type them up, with the word processor, despite initial resistance, they would type directly themselves.  Of course, after initial diversity, Microsoft Word soon became dominant – another emergent monopoly, although now matched by Google Workspace, the two having about 90% world share between them.

So, in general, skilled working-class jobs have been destroyed by automation, leaving a growing underclass with minimum-wage jobs and gig work.

What we’re seeing now is that mid-range intellectual work is starting to be eaten by AI [BSI25].

You may have seen the MIT report that found that while many companies were investing heavily in AI, around 95% of the projects were considered to be failing or underperforming [CP25].  So effective job substitution is not yet universal, but in some areas, such as computing, many of the lower-level roles, typical graduate first jobs, are being replaced by AI.  Until recently an expert developer would have several junior developers who did the grunt work; now this is done by AI.  Similar pictures are emerging in advertising, aspects of finance, and some of the large management consultancies [Ko25,Sw26,IPA26,KM26,Pa26].  In the UK, and even more so in other parts of the world, there are strong pushes to use AI more extensively within government, not least on the assumption that it will improve efficiency [GUK25,Dx26].

There’s a critical issue about who’s in control.  Think about the road network.  In the UK there are some private roads and also some toll roads, but the majority of roads, including almost all in urban areas, are owned by the local authority or central government.  That is, the vast majority of the road network is local in terms of its maintenance and control.  Imagine if the road network were instead owned by two or three major companies based on the west coast of America.  Imagine if every road in Malaysia, every road in Indonesia, as well as every road in the UK were owned by those two or three companies half the world away.  If there’s a pothole in the road, it is those companies to whom you have to complain.  Perhaps they decide to charge you to use the road outside your house, or decide to remove the roads entirely if they’re in dispute with you or your government.

That’s exactly the direction we are moving in with AI and public services.  Even assuming the best intentions of the big AI players, this does feel worrying.  And, of course, this isn’t a choice you can opt out of.  Just like cars and roads, once AI is embedded into public services, everything orients around it.

Returning to the changes in employment, once we lose the entry-stage jobs, there’s a clear problem for the people who would have had them.  All the graduates from our universities who would have been going into those jobs are being hit and, in many countries, on top of large student loans [DoE25,Pa25,Pa26].  This is creating a class of people who are underemployed, inexperienced, and quite likely disaffected with society.  Think of this in the light of the rise of extremism across the world.  Often extremism is dismissed as a problem of the uneducated, but here we are adding a vast number of highly educated, disaffected people, ready to further spread those extreme messages.

 

6.2  Locked into AI

This is also a problem within an organisation.  If you are not employing those early-career people, what happens in five or ten years’ time as your more experienced employees move up the organisation?  How do you fill those gaps if you haven’t been training people?

This might be something we need to address as universities, training people effectively to higher and higher levels so that they can jump in at that point.

Or the organisation can simply find it needs more AI – what it certainly can’t do is just turn off the AI, because it no longer has people with the experience to do the jobs.  The company has become locked into the use of AI.

This is also true of data.  Microsoft has a guide entitled “Prepare your data for AI” [Ms26].  The use of AI does not come for free, but needs data to be rearranged for it.  One does wonder whether the same effort spent making data ready for AI could be better spent making it ready for simpler statistical algorithms.

However, let’s assume you have put effort into reorienting your whole data estate around AI.  Your systems rapidly become AI dependent – your recent information and new data have become deeply embedded into the AI itself in ways that are often opaque.

Once you have bought into an AI system, you can’t just say, “well, let’s just swap to something else”.  It’s difficult even to swap vendors once it is that embedded.

 

6.3  Buy now … pay later

If you have a loan with interest, you know you have to pay for it eventually, but things can be less obvious.  When I was little, my mum had a Kays catalogue, a sort of 1960s equivalent of internet selling [WA17].  Its pages were full of big colour pictures of clothes, white goods, toys, etc. … it was usually the toys I was looking at.  You could buy things from the catalogue and pay over 20 weeks with no interest, but of course the things cost more than if you had the ready cash to buy them in a shop.  So effectively you were paying extra.
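
The hidden cost is easy to quantify.  As a minimal sketch (all prices invented for illustration; Kays’ actual mark-up is not recorded here), the extra paid over the cash price behaves like interest charged on the average outstanding balance:

```python
# Hypothetical illustration of the hidden cost of 'interest-free' catalogue
# credit (all figures invented): an item costs 20 pounds cash in a shop but
# 22 pounds from the catalogue, paid in 20 equal weekly instalments.
cash_price = 20.00
catalogue_price = 22.00
weeks = 20

premium = catalogue_price - cash_price      # 2 pounds extra paid overall

# Over the repayment period the buyer owes, on average, about half the cash
# price, so the premium behaves like interest charged on that average balance.
avg_balance = cash_price / 2
period_rate = premium / avg_balance         # 20% over the 20 weeks
annual_rate = period_rate * (52 / weeks)    # crude annualisation

print(f"premium paid: {premium:.2f}")
print(f"rough equivalent annual rate: {annual_rate:.0%}")
```

A 10% mark-up spread over 20 weeks is thus roughly equivalent to a 50%-plus annual interest rate – ‘no interest’ only in name.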

AI currently is in that ‘buy now, pay later’ mode, both globally and locally for individuals.  AI growth is funded by massive investment (as we discussed, absolutely huge), possibly more than for anything before, except perhaps the South Sea Bubble.  However, the income doesn’t come close to covering the costs, and the ratio between expected income and investment is way out of kilter with what you’d expect even for a digital company, let alone a physical one.

So how do the books add up?

If you’re an accountant in the company, or if you’re an investment manager, what are you thinking as you see these figures?  Why don’t you sound the alarm?  The reason is that you expect more money from that stream in the future.  In early digital companies, like Amazon, you could assume this because the market itself would grow: the number of people using the service would increase.

But AI already has lots of users, so instead you have two options.  The first is to find ways to produce more cheaply, which is happening to some extent already.  However, you don’t want it to get too cheap, otherwise competitors can enter the market.  The alternative, and your only real option, is to recoup your investment by charging more or getting the same customers to use more.  Either way, it is the customer who pays in the end!

This is no secret.  Fortune magazine said that OpenAI’s business plan relies on “what amounts to a bet on dominance” [Sm25].  That is, in putting in all that investment, what investors are hoping is that the company will become the AI company in an area that everybody is tied into.  And then of course they can charge pretty much what they like: a buy now – pay later world. We’re using AI now, but the cost is going to come later on.

 

Coming next …

Part 7 – what can we do?

It all seems too big, requiring national and international responses.  But we can make a difference using appropriately chosen small AI (including none). Plus, this good use of AI is good for business too.

 

Update.

Since the talk, I read about a woman who had developed a close relationship with a chatbot hosted on a version of ChatGPT that is due to be retired [He26]. While she could probably export her chat history and use it to reinitialise the new version of the software, it would not be the same.  We will soon start to hear similar stories for business and public systems: tech companies have never had a good record of backward compatibility, and it is all but impossible with current LLMs.

Also, in late January, OpenClaw was released [OC26].  This highlighted the way current payment models do not reflect the actual cost of use.  OpenClaw (originally called Clawdbot) is an open-source GitHub project that uses the Claude API to create an automated assistant coordinating web and desktop resources.  Within days of the launch, Anthropic enforced a long-standing but previously unenforced restriction on third-party use of its API and blocked OpenClaw for most user accounts, including its $200 Max account.  These accounts come with monthly usage limits, but the business model of even premium accounts depends on users NOT using their full monthly allowances.  OpenClaw encouraged full use of those limits, exposing the fact that the true cost of full use vastly exceeds the subscription price [Ba26].

References

[Ba26] Novy Baf (2026).  Anthropic Pushed Its Most Loyal Developers Straight Into OpenAI’s Arms. OpenAI Didn’t Even Have to Ask.  The Nov Tech, 2nd Mar 2026.  https://www.thenovtech.com/p/anthropic-pushed-its-most-loyal-developers

[BSI25] British Standards Institution (2025). Evolving Together: AI, automation and  building the skilled  workforce of the future.  https://www.bsigroup.com/en-GB/insights-and-media/insights/whitepapers/evolving-together-flourishing-in-the-ai-workforce/

[CP25]  Aditya Challapally, Chris Pease, Ramesh Raskar, Pradyumna Chari (2025). The GenAI Divide: State of AI in Business 2025. MIT NANDA, July 2025. https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf

[Dx26] A. Dix. (2026). Beyond the Algorithm: Designing Human-Centric Public Service with AI. Talk at Service Design for Public Sector Spotlight Seminar series of challenges and opportunities between Design Cultures and Public Sector, Sapienza, University of Rome + Online, 4th February 2026. https://alandix.com/academic/talks/Rome-Seminar-Feb-2026/

[DoE25] Department for Education (2025). The impact of AI on UK jobs and training. November 2023.  https://www.gov.uk/government/publications/the-impact-of-ai-on-uk-jobs-and-training

[GUK25] Gov.UK (2025). AI to power national renewal as government announces billions of additional investment and new plans to boost UK businesses, jobs and innovation. Press release from Department for Science, Innovation and Technology, HM Treasury, Wales Office, The Rt Hon Liz Kendall MP, The Rt Hon Rachel Reeves MP and The Rt Hon Jo Stevens MP.  20 November 2025. https://www.gov.uk/government/news/ai-to-power-national-renewal-as-government-announces-billions-of-additional-investment-and-new-plans-to-boost-uk-businesses-jobs-and-innovation

[He26]  Stephanie Hegarty (2026). Rae fell for a chatbot called Barry, but their love might die when ChatGPT-4o is switched off. BBC News, 14 February 2026. https://www.bbc.co.uk/news/articles/crl43dxwwy9o

[IPA26] IPA (2026). IPA Agency Census 2025 shows workforce declines while diversity improves.  Institute of Practitioners in Advertising. 11 February 2026. https://ipa.co.uk/news/agency-census-2025/

[KM26] Lucy Knight and Sumaiya Motara (2026). The big AI job swap: why white-collar workers are ditching their careers. The Guardian,  11 Feb 2026. https://www.theguardian.com/technology/2026/feb/11/big-ai-job-swap-white-collar-workers-ditching-their-careers

[Ko25] Saskia Koopman (2025).  Big Four slash graduate jobs as AI takes on entry level work. City AM, 23 June 2025. https://www.cityam.com/big-four-slash-graduate-jobs-as-ai-takes-on-entry-level-work/

[Ms26] Microsoft (2026). Prepare your data for AI. Dated 20/1/2026.  https://learn.microsoft.com/en-gb/power-bi/create-reports/copilot-prepare-data-ai

[OC26] OpenClaw (2026).  OpenClaw — Personal AI Assistant. https://github.com/openclaw/openclaw

[Pa25] Joanna Partridge (2025). Gen Z faces ‘job-pocalypse’ as global firms prioritise AI over new hires, report says. The Guardian,  9 Oct 2025. https://www.theguardian.com/money/2025/oct/09/gen-z-face-job-pocalypse-as-global-firms-prioritise-ai-over-new-hires-report-says

[Pa26] Joanna Partridge (2026). More than a quarter of Britons say they fear losing jobs to AI in next five years. The Guardian,  25 Jan 2026. https://www.theguardian.com/business/2026/jan/25/more-than-quarter-britons-fear-losing-jobs-ai-next-five-years

[Sm25]  Dave Smith (2025). OpenAI says it plans to report stunning annual losses through 2028—and then turn wildly profitable just two years later. Fortune, November 12, 2025. https://fortune.com/2025/11/12/openai-cash-burn-rate-annual-losses-2028-profitable-2030-financial-documents/

[Sw26] Mark Sweney (2026). UK ad agencies undergo their biggest exodus of staff as AI threatens industry. The Guardian,  13 Feb 2026. https://www.theguardian.com/media/2026/feb/13/uk-ad-agencies-biggest-annual-exodus-of-staff-ai-threatens-industry

[WA17]  Worcestershire Archive and Archaeology Service (2017).  Christmas and Kays.  Explore the Past. 19th December 2017. https://www.explorethepast.co.uk/2017/12/christmas-and-kays/

The Abomination of AI – part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

This is the fifth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.1, §5.2.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

5.3  Digital and AI breaks market economics

We have seen that digital technology breaks market economics.  Yet this is what our whole world is built on.  Even countries that are not fully market economies, such as China, often rely upon market economics extensively, both internally and globally.  Indeed, market economics has driven nearly all of late 20th century trade and much before that, including the industrial revolution.  Of course, market economics has not been good for everything and certainly not for everybody, but it’s had elements of success.  But now it is broken.

Digital technology breaks market economics and AI makes it worse.

One of the ways that AI makes this worse is that the new AI, large language models and the like, are built on big data and big computation.  This means that they require big business … really big business, a business that’s bigger than most countries, in order to get in the game.

But once you’re in that game,  you have a large volume of people using your systems, generating more data, which can be leveraged to encourage more people to give you data.  For example, this can include governments giving certain companies, sometimes exclusive, access to public health data.  And, of course, this then means the successful companies have the money to invest in more data centres to process that data.

Here we have yet another positive feedback loop exacerbated by the huge computational and data needs of AI.

And of course that has effects, not least the environmental impact as seen in the data about energy and water use.

But also, when companies are so big, there is a potential democratic deficit.  This was very evident in America during Trump’s inauguration, with the ‘tech bros’ surrounding him.  Although there have been some fallings-out among them since, the power of big business was very evident.  And that’s in the US; smaller countries really struggle because the businesses are bigger than they are.

 

5.4  With the best will in the world …

So digital, by its very nature, leads to runaway inequality, which AI intensifies.  You have to work hard to stop that happening.

This doesn’t mean you can’t.  As we discussed, in our body’s immune system, we have positive feedback loops that are important to fight infection.  These would lead to autoimmune diseases if unchecked, but they are moderated by negative feedback loops that control them.  Similarly, the macro-economic feedback loops of digital technology and AI are not unstoppable, but the natural progression is just for them to keep on going.

Now this potentially runaway growth of AI happens even if everybody plays nice.  It is not about evil owners of AI companies who are trying to control the world.  With the best will in the world, this will happen.

But, of course, they don’t always have the best will in the world.

Some of the problem is baked into our commercial legal systems.  In the UK, if you are on the board of directors of a company, your legal responsibility is to your shareholders, which typically means profit maximisation.  So even if you might have liked to do something better for society or the world, you are legally bound to do the thing that maximises profits.

So the leaders of big AI are almost forced not to do the right thing, though how far they lean into this varies between individuals.

 

5.5  … Or not

Facebook internal strategy document quoted by Cory Doctorow [Do25]

In 2025 Meta, the owner of Facebook, was in the midst of an anti-trust case in the US regarding its takeover of Instagram in the early 2010s [Da25].  The US Government eventually lost its case against Meta, due largely to the emergence of TikTok as a competitor in the meantime.  However, as part of the case various internal Facebook documents came into the public domain.  Cory Doctorow, the open software campaigner, quotes from one internal strategy document, which showed that Mark Zuckerberg and Facebook understood precisely the role of emergent digital monopolies:

“Social networks have two stable equilibria: either everyone uses them, or no-one uses them.”

“… The binary nature of social networks implies that there should exist a tipping point, ie some critical mass of adoption, above which a network will organically grow, and below which it will shrink.” [Do25]

Other emails show that this understanding led to very deliberate attempts to stifle Instagram’s growth [Da25].  That is, Facebook was well aware of network effects and the presence of tipping points, and was prepared to use techniques to ensure it ended up on the side of the critical mass it wanted.

These statements were made in a largely pre-AI context (at least as AI is understood today), with regard to the role of emergent monopolies in social media, but are now, of course, intensified by AI.  I’m sure Meta was not and is not alone in being aware of these effects and being prepared to use them.
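
The memo’s ‘two stable equilibria’ and tipping point can be illustrated with a toy adoption model (my own sketch, not from the Facebook documents): the adoption fraction grows when it is above some critical mass and shrinks when below it.

```python
# Toy model (illustrative only) of a network with 'two stable equilibria':
# adoption fraction x grows when above a tipping point t and shrinks when
# below it, so over time it ends up at either 0 or 1.
def step(x, t=0.3, r=1.0):
    # the change is positive only when x lies between t and saturation at 1
    return x + r * x * (x - t) * (1 - x)

def final_adoption(x0, steps=200):
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

print(final_adoption(0.25))  # just below the tipping point: shrinks towards 0
print(final_adoption(0.35))  # just above it: 'organically grows' towards 1
```

Two starting points only a few percentage points apart end at opposite extremes, which is exactly why a company aware of the tipping point has such a strong incentive to manipulate which side of it a rival sits on.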

Coming next …

Part 6 – should we worry?

Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.


 

References

[Da25] David Dayen (2025). The Government Has Already Won the Meta Case. The American Prospect, April 16, 2025. https://prospect.org/2025/04/16/2025-04-16-government-already-won-meta-case-tiktok-ftc-zuckerberg/

[Do25] Cory Doctorow (2025). Mark Zuckerberg personally lost the Facebook antitrust case. Pluralistic. Apr 18, 2025. https://pluralistic.net/2025/04/18/chatty-zucky/

 

Minor bugs in major applications

Why do big applications such as MS Word and Gmail get new errors in heavily used parts that used to work?

Two have been annoying me lately.

Gmail’s disappearing send button

One is relatively minor, but Gmail seems to have forgotten how to work out the screen size so that when you create a new email the ‘send’ button is nearly invisible at the bottom of the page:

I know it is the send button, but the first time this happened, it was somewhat disconcerting – was I absolutely sure?  In fact the full button is there and if the email underneath is not too long  and you scroll to the end, the button appears:

… although then the menu at the top of the Gmail window half disappears!

There are similar left-to-right problems.  During one of its updates Gmail seems to have lost track of the exact window size, by about 20 pixels or so … but it used to be fine before.

And yes, I have reported this and the same problem happened with the send button at the bottom of the problem report form!

Word’s phantom changes

The second problem is with Microsoft Word and is far more difficult.  I commonly open an old document and select some text to copy into a new one I’m working on.  When I go to close the old document I get a file save dialogue:

I have changed nothing in the document … but then I have moments of doubt, especially if I’ve left it open for a while.

Perhaps I noticed a typo in the old document and forgot that I did it?  Perhaps I accidentally typed new text here that was intended for the new document?  I obviously don’t want to lose anything that was intentional, even if in the wrong place.  So would the safe thing be to save anyway?

But on the other hand, perhaps I accidentally typed something into the old document, maybe even deleted a whole section without realising?  I don’t want to lose anything important in the old document, nor even confusingly change its update time unnecessarily.

Here I’ve found no way to check whether this is a real change to the document or simply some sort of ghost changes to things Word keeps track of but are not really part of the document text that I see.
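
One workaround (my own sketch, not a Word feature) is to save the doubtful document under a new name and compare the visible text of the two files.  A .docx file is a zip archive with the body text in word/document.xml, so the standard library is enough:

```python
# Sketch of a workaround (not a Word feature): treat a .docx as the zip
# archive it is, pull out the visible body text, and compare two files to
# see whether a 'phantom' change touched the text you actually see.
import re
import zipfile

def visible_text(docx):
    """Return the body text of a .docx (path or file-like object)."""
    with zipfile.ZipFile(docx) as z:
        xml = z.read("word/document.xml").decode("utf-8")
    # crude: strip all XML tags, leaving only the text content
    return re.sub(r"<[^>]+>", "", xml)

def same_visible_text(docx_a, docx_b):
    return visible_text(docx_a) == visible_text(docx_b)

# usage (hypothetical filenames): save the old document under a new name,
# then check whether the text you see actually changed:
#   same_visible_text("report.docx", "report-resaved.docx")
```

This ignores formatting and metadata, which is exactly the point: if the visible text is identical, the save prompt was probably only about the housekeeping data Word tracks behind the scenes.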

Poor coding, poor engineering or just AI?

In both cases the fault is repeatable, persistent and in some of the most commonly used parts of the systems.

The errors seem naive if accidental; and if in each case there was a deliberate change to the algorithm for screen size or for the document change flag, then a single use test by the developer would have been enough to find and fix the problem.  Is this poor coding, or the result of replacing developers with AI?

Once the error has happened, how does it get through regression testing?  I’d have thought that automated testing would pick up this sort of change.  Is there no periodic human sanity-check testing, or has this also been replaced with AI?
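
The kind of regression check that ought to catch a disappearing button is cheap to write.  The following is entirely hypothetical (names and the layout rule are invented, nothing here is Gmail’s actual code), but it shows the scale of test that was apparently missing: sweep a range of window and content sizes and assert the primary action stays on screen.

```python
# Entirely hypothetical sketch of a regression check for a 'disappearing
# button' layout bug: lay out a compose window at a range of viewport and
# content sizes and assert the primary action button is fully visible.
BUTTON_HEIGHT = 40

def button_top(viewport_height, content_height):
    # toy layout rule: the button sits after the content, but is pinned so
    # it never drops below the bottom of the viewport
    return min(content_height, viewport_height - BUTTON_HEIGHT)

def button_fully_visible(viewport_height, content_height):
    top = button_top(viewport_height, content_height)
    return top >= 0 and top + BUTTON_HEIGHT <= viewport_height

# the regression test: sweep plausible sizes rather than testing one case
for vh in (400, 600, 768, 1080):
    for ch in (0, 100, vh - 20, vh, 5 * vh):
        assert button_fully_visible(vh, ch), (vh, ch)
print("button visible at all tested sizes")
```

A miscalculation of the window size by 20 pixels, of the kind described above, would make these assertions fail immediately on every run.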

I’m sure my friend Nad, who is a master of architectural design and agile software engineering processes, would have something to say about this!

Minor bugs, major costs

Both are relatively minor inconveniences in the grand scheme of things, especially in a world where so many live in fear for their lives.  Yet the cumulative effects are still major.  These big products are used by billions, and each minor friction and inconvenience adds up to a huge global cost in terms of added stress and lost productivity.

Of course, I am not going to stop using Gmail or Word because of this.  Perversely, because these are standard products used by so many, users are unlikely to change, so there is little incentive for the tech companies to avoid these huge costs to society at large … issues not unrelated to my current Abomination of AI blog series!

Not at CHI – points of view and reporting standards

For various reasons I won’t be at CHI in Barcelona, but I’d like to highlight two events I would have been part of had I been there.

One is more practice focused: the CHI 2026 UXR POV Workshop: Developing an AI-Powered UX Research Point of View (POV) (Thurs, 16th April, 14:15 – 15:45 CEST & 16:30 – 18:00 CEST).  This workshop builds on a strand of work driven by Renée Barsoum, Huseyin Dogan and Stephen Griff that seeks to create tools, in the form of playcards, to help understand the wide range of stakeholder points of view during user research.  I’ve made a short video for the workshop and I’ll distribute that after the event (no spoilers!).

The second is more research focused, a panel, Does Peer Review Need to Change? A Panel on Reporting Standards and Checklists in the Age of AI (Mon, 13th April, 14:15 – 15:45 CEST).  I’ll write a little more about this here, as I won’t be there in person, but these are my personal views; the other panelists won’t necessarily agree!  If you are in Barcelona, go to the panel to see what they say.

Why reporting standards?

The reason for this panel is that CHI, along with many conferences, faces issues of workload and consistency of reviews.  The problems have been exacerbated by AI, with both AI-authored papers and AI-generated reviews.

This is not just a problem for CHI.  Some years ago a computing conference needed to split its programme committee into two halves to deal with the volume of papers.  They were worried about consistency between the sub-committees, so had both sub-committees look at an overlapping sample of the papers.  They found that the two sub-committees agreed on a small number of very high quality papers and also on a larger number of definite rejects.  However, for the large majority of papers between these extremes, agreement was no higher than chance.
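
‘No higher than chance’ has a precise reading.  As a sketch with invented numbers (the actual conference figures are not reported here), Cohen’s kappa measures agreement after discounting what two independent committees would agree on by accident:

```python
# Illustration with invented numbers of 'agreement no higher than chance',
# using Cohen's kappa. Two committees each accept 25% of 200 shared papers.
def cohens_kappa(both_accept, a_only, b_only, both_reject):
    n = both_accept + a_only + b_only + both_reject
    observed = (both_accept + both_reject) / n       # raw agreement rate
    p_a = (both_accept + a_only) / n                 # A's acceptance rate
    p_b = (both_accept + b_only) / n                 # B's acceptance rate
    # agreement expected if the two committees decided independently
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# With independent 25% acceptance we would expect 0.25 * 0.25 * 200 = 12.5
# papers accepted by both; 13 joint accepts gives kappa close to zero, even
# though the committees 'agree' on 63% of decisions.
k = cohens_kappa(both_accept=13, a_only=37, b_only=37, both_reject=113)
print(round(k, 3))
```

The raw agreement rate looks respectable only because both committees reject most papers; kappa near zero is what ‘no better than chance’ means.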

The CHI panel will describe the ways some other disciplines have tried to tackle this.  This has been particularly important in medicine, where rigour in research is literally a life-or-death issue; there are standards for different kinds of work, for example the CONSORT standards for reporting randomised trials.  In other disciplines, including education and psychology, it can be hard to agree on definitions of quality, so they have often opted for standard ways to present results, making it easier for reviewers to focus on specific aspects, and hence leading to more consistent reviews.  Could a similar approach work in CHI?

A launch pad not a shackle

One of the reasons I was invited onto the panel was a CHI paper from a few years ago, HARK No More: On the Preregistration of CHI Experiments, with Andy Cockburn and Carl Gutwin.  Although I was the card-carrying mathematician/statistician amongst the authors, I was also the one who kicked back slightly against strict demands for pre-registration.  Instead I advocated using it as a base point from which variations in data collection or analysis might be made, but where such variations needed to be clearly and strongly justified.

Similar caution is needed with standardised reporting more broadly.  Even with a range of different templates for different kinds of papers, there will always be work that doesn't quite fit … I'm wondering what reporting standards for pictorials would look like!  So any process should allow variations, and papers that completely step outside the accepted formats – otherwise the discipline will be frozen.  But when the standards are not followed, the discrepancies need to be justified and the bar set higher.

Democratising access

While the reasons for considering reporting standards emerge from issues such as workload and consistency of reviewing, the greatest benefits in my mind are far wider.  One of these is to help open up venues to those who are not part of in-groups.  During 40 years of publishing I have seen my own papers grow in length, with massively more references per paper, but I am not convinced that more recent work is more informative.

A year or two ago ACM surveyed members on acceptable uses of AI in academic publishing: should it be allowed at all? Should it be allowable to include an AI in the author list?  After a point my answers became variants of a single theme:  "if we can't tell the difference between AI bullshit and academic bullshit, AI is not the problem".

CHI especially has a genre, a way of writing, which successful CHI authors learn and share through apprenticeship among their colleagues and students.  It is not that the substance doesn’t matter, but there are particular ways to say it as well. More formulaic paper structures would help authors focus on the content, rather than the form, making it easier for readers new to the community to draw out the critical information, and helping ensure that high quality work of authors new to the community is recognised.

Building the discipline

Academic venues are often rated based on their acceptance rates, with around 25% being the mark of a good venue. One of my comments in discussing the panel proposal (with which none of the other panelists agree!) was3:

a successful discipline has a 100% acceptance rate

Of course I don’t mean just accept everything, but rather that a 25% accept rate means 75% of work is effectively wasted. Now of course some of that will get published elsewhere, and not all work will be equally informative or innovative, but if academics and researchers are spending time on work that is effectively thrown away, that is a disaster.  Ideally every piece of research work should be of a form and standard that contributes to knowledge even if incrementally.  If this is not the case, then the discipline has a duty to educate researchers, especially early career researchers.

Reporting standards could help.  As well as retrospectively asking, "how do I write up the work I have done better?", they can be used prospectively to plan: "what work do I need to do in order to be able to write a paper of this form?"  That is, templates for good reporting become templates for good research, raising the overall quality of the discipline.

That seems a goal worth pursuing.

 

 

 

 

  1. CHI is the largest international conference in human–computer interaction.
  2. I can't recall which conference this was; if you know, please let me know.
  3. I'm not entirely alone, however; it has been suggested that low acceptance rates might reduce the overall quality of a conference! B. Parhami, "Low Acceptance Rates of Conference Papers Considered Harmful", Computer, vol. 49, no. 4, pp. 70–73, Apr. 2016. doi:10.1109/MC.2016.106

The Abomination of AI – part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

This is the fourth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

5.  Why is this happening?

Why is this happening?  Well, we know the world is unequal, and we know that the way free markets work means that big companies often get economies of scale and grow larger.  Is it just natural that the same is happening with AI?

The answer is 'no'; this is clear from the way AI stocks have performed, unlike any previous (legitimate) business.  There are elements of the normal operation of markets, but there are particular properties of digital technology in general, and AI in particular, that break aspects of market economics and lead to emergent monopolies.

These are due to positive feedback loops.  If you are from an engineering background you'll know about these, but for those who aren't, we'll take a little segue to look at positive feedback loops in general and then come back to how they apply in the economic sense.

 

5.1  Understanding feedback loops

Image: By Charles Schmitt – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44338386

Feedback loops are everywhere.  The term simply means a process where the output in some way influences the future input.

One type is called a negative feedback loop, where a change in the input creates an effect that counters the change.  This can be engineered.  The classic example is the centrifugal governor for a steam engine, which keeps the engine running at a set speed. It consists of a set of steel balls on arms that spin as the engine spins.  As the arms spin faster, the steel balls rise due to centrifugal force, which opens a valve, reducing the pressure of the steam and hence the speed of the engine.  If the engine turns too slowly, the balls fall, shutting the valve, increasing the steam pressure and hence the speed of the engine. Notice that this negative feedback leads to stability and balance.

In geometric shapes, smoothness is often an indication of a negative feedback effect. A water drop is a smooth sphere because any small disturbance on the surface is counteracted by the surface tension, so any little dents fill back in again very rapidly.  Once again the negative feedback loop creates a stable balance.

Positive feedback effects are when the output that is produced reinforces the original change. Think about a microphone being put near a speaker and the screech you get – that is a classic positive feedback effect: instability and extremes.  In physical structures, positive feedback effects often lead to sharp edges, like a snowflake: as the snowflake forms, any sharp point attracts more ice formation and therefore grows.

Positive feedback often leads to tipping points, where you get sudden changes, and hysteresis, where changes are hard to reverse.  Many climate change issues are of this kind.

This sounds as though positive feedback is a bad idea, but positive feedback can be really powerful.  Snowflakes are beautiful and they happen because of this!  In our bodies our immune system has some positive feedback cycles so that our bodies can react very rapidly.  Positive feedback often leads to exponential growth, and here the immune system can ramp up very quickly to fight infections.  However, useful positive feedback is usually wrapped around with controls incorporating negative feedback, which prevents the overall system becoming too extreme.

So it is not that positive feedbacks are bad per se and negative ones good.  However, it often feels as though they should be labelled the other way round, as positive feedback on its own tends to have these runaway effects, and nobody wants a screeching microphone!
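The two behaviours can be seen in a few lines of code.  This toy simulation (my own illustration, not from the talk) feeds a fixed fraction of each deviation from a target back into the next step: a negative gain behaves like the governor, a positive gain like the microphone.

```python
def simulate(x0, gain, steps=20, target=0.0):
    """Each step feeds a fraction of the deviation back into the input.
    gain < 0: negative feedback (pushes back toward the target);
    gain > 0: positive feedback (reinforces the deviation)."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = x + gain * (x - target)
        trajectory.append(x)
    return trajectory

damped = simulate(1.0, gain=-0.5)   # governor-like: deviation halves each step
runaway = simulate(1.0, gain=+0.5)  # microphone screech: deviation grows 1.5x each step
print(damped[-1], runaway[-1])      # tiny residual vs. exponential blow-up
```

After twenty steps the damped loop has all but vanished back to its target, while the runaway loop has grown by a factor of thousands, which is the exponential growth described above.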

 

5.2  Network effects / externalities

Image: https://en.m.wikipedia.org/wiki/File:Microsoft_Office_Word_%282019%E2%80%93present%29.svg

Human society has many networks, some mediated by technology, some by our normal human relationships, such as networks of people that know one another, or business contacts.  Some of these networks are within a single group, others span several groups or kinds of people, for example the way teachers are connected with the children they teach, who in turn have parents, who may themselves know each other or talk to teachers at parent evenings.

Crucially, these human social networks change the value of digital goods.  To be precise, they can change the value of other kinds of goods as well, but particularly digital ones.

If your colleagues all use Microsoft Word, then it makes more sense that you use Microsoft Word rather than, say Apple Pages.  I use PowerPoint for presentations largely because I often want to share slides with other people, even though I work on a Mac and Keynote might be better for some effects.

These are positive feedback cycles.  If I use something, it makes it of more value for you to use the same thing.  If you use it, it makes it of more value for me to use it.  Like all positive feedback, this leads to runaway effects.
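This runaway dynamic can be captured by a classic Pólya-urn sketch (an illustrative model of my own, not data about any real product): each new adopter picks a product with probability proportional to its current installed base, so small chance leads early on are amplified, and lopsided outcomes that would be vanishingly rare under independent choice become common.

```python
import random

def market(steps, rng):
    """Pólya urn: each newcomer picks a product with probability
    proportional to its installed base (the network externality)."""
    a, b = 1, 1
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a, b

rng = random.Random(1)
# Run 200 markets of 1000 adopters each, starting perfectly level,
# and count how often the leader ends with over 80% of the market.
lopsided = sum(max(market(1000, rng)) / 1002 > 0.8 for _ in range(200))
print(f"{lopsided} of 200 markets ended with one near-monopoly")
```

With independent 50/50 choices essentially none of the 200 runs would end that lopsided; with the feedback loop, a large fraction do, even though the two products are identical in every other respect.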

Image: By Calistemon – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=127909261

Now for a little bit of economics.  Market economics assumes that markets are open; that is, it is possible for new businesses to start in an area and compete with existing ones, often leading to more efficient production.  The arguments for why market economics works (to the extent it does, and there are limits to that) are predicated on this openness.  So, when monopolies happen, there are problems.

The most obvious kind of monopoly is a natural monopoly, where there is a single rare resource that only one or a small number of people control, just as many of the rare earths are found in China.  This is a natural phenomenon but can still cause problems, hence worries about finding alternative sources or alternative materials.

Sometimes monopolies are engineered, when a group of people in a sector come together and agree to keep prices high or restrict output.  Most countries have antitrust or anti-monopoly laws, which try to ban this behaviour so that new players can come into a market and it doesn't become controlled.

The trouble with network effects is that the positive feedback leads to a winner-takes-all situation.  The issue first hit the digital headlines back in 2001 concerning Microsoft's bundling of Internet Explorer [LM01], but it applies to much other software.  It is very hard to have even two successful software products in an area, say Keynote and PowerPoint, let alone lots of different presentation packages, because if one person uses a product it changes its value for everybody else.  This is an emergent monopoly.

Note, this is not because the manufacturers get together and do something underhand.  It is just a natural impact of digital technology, which you have to work hard to avoid.  There are ways of doing this: you can ensure open standards, for example; the fact that PPTX is an open format means it is possible for other products to use it and interoperate with PowerPoint.

So there are ways you can counter the worst effects, but the natural impact is often for digital goods to give rise to these emergent monopolies.

Coming next …

Part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.


 

References

[LM01] Liebowitz, S., and Margolis, S. (2001). Network effects and the Microsoft case. Chapter 6 in Dynamic competition and public policy: Technology, innovation, and antitrust issues, J. Ellig (ed.), pp.160–192. https://personal.utdallas.edu/~liebowit/netwext/ellig%20paper/ellig.htm

 

The Abomination of AI – part 3 – a different kind of apocalypse

Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

This is the third of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

4.  A different kind of apocalypse

Image: [Do26]

The term ‘abomination’ conjures up apocalyptic images – something that defiles the sacred, often so malign and powerful that it either destroys the world itself, or through its influence drives others to mutual despoliation or annihilation.

I’m talking about this regarding the nature of AI, how AI changes society, so it is indeed a bit apocalyptic!

There are different kinds of apocalyptic views regarding AI.  The idea of the global war machine let loose on humanity, envisaged in Terminator, was distant science fiction when the films were first released, but sounds prescient as the war in Ukraine is fought by drones hunting humans and, in Gaza and succeeding conflicts, Israel's military decisions have increasingly been taken by AI [Be23,DM23].  While a Terminator-style takeover still feels pretty distant, an accidental conflagration feels much less so.

Many fears centre around the singularity – the point at which AI becomes capable of designing itself, leading to runaway developments over which we have no control.  Related to this is the point at which AI becomes self-aware and maybe decides that humans are rivals to be squashed, or simply pushed aside as irrelevant.  The ex-OpenAI researcher Daniel Kokotajlo recently announced that true AGI (artificial general intelligence) was not as imminent as first envisaged, and gave the world a reprieve until 2034 [Do26] – well, we can all heave a sigh of relief.

While this form of disaster scenario should not be ignored entirely, there are more immediate worries.  Without being sentient or omnipotent, AI is transforming the world.

 

4.1  The end comes quietly

Disaster scenarios make good Hollywood movies, but often the end comes quietly.  In the past some empires and civilisations have collapsed entirely, but more often there is a slow decay, a series of more minor crises and a gradual withering from within.

It is this more insidious impact of AI that concerns me.

 

4.2  Facts and figures

Let's consider some facts and figures about AI.  Some involve estimates with varying levels of confidence, but together they paint a picture.

First is the announcement that Tesla shareholders had approved a $1 trillion pay deal for Elon Musk [JM25].  This is a 10-year deal, and a lot of it is in stocks and shares, so you could argue whether it's real money or not, but it is still substantial.  Or rather, not just substantial, but enormous.  This is a trillion dollars: not a million, nor even a billion, but a million million.  A trillion dollars is about $3,000 for each man, woman, and child in the US or, over the 10 years, about $300 per person per year.
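The per-person arithmetic is easy to check (the US population figure of roughly 335 million is my assumption):

```python
total = 1_000_000_000_000       # $1 trillion = a million million dollars
us_population = 335_000_000     # rough US population, assumed figure
per_person = total / us_population
per_person_per_year = per_person / 10   # spread over the 10-year deal
print(round(per_person), round(per_person_per_year))  # roughly $3,000 and $300
```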

I first studied economics in the late 1970s.  All societies are unequal, and there is a well-known rule that the high-end tail of incomes in western countries follows an approximate 1/X^K rule (with K ≈ 2), where the number of earners at a particular income is inversely proportional to the square of the income, or smaller [Mi78,AB10].  This means that there are a few people with vast amounts and lots of people with much less. But the people with huge amounts were few enough that they didn't make a huge difference to the overall picture: if the income of the rich had been spread over all of society, it would have made almost no difference.  Overall, the volume of money was in the middle income range.
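A quick Monte Carlo illustrates the "money in the middle" point.  This is purely my own illustration with made-up numbers: it samples incomes from a Pareto tail slightly steeper than the borderline K ≈ 2 case (earner density falling like 1/X^3, so that the average income is finite), then asks what share of total income the top 1% actually hold.

```python
import random

def pareto_incomes(alpha, xmin, n, rng):
    """Inverse-CDF sampling: P(income > x) = (xmin / x)**alpha,
    so the density of earners falls like x**-(alpha + 1)."""
    return [xmin / (1 - rng.random()) ** (1 / alpha) for _ in range(n)]

rng = random.Random(7)
# 100,000 hypothetical earners, minimum income $20,000 (made-up parameters)
incomes = sorted(pareto_incomes(alpha=2.0, xmin=20_000, n=100_000, rng=rng))
top_share = sum(incomes[-1000:]) / sum(incomes)   # share held by the top 1%
print(f"top 1% hold about {top_share:.0%} of all income")
```

With this tail the top 1% hold only around a tenth of the total, so most of the money sits with the broad middle; the change described below is precisely that such tail assumptions no longer hold.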

This has important implications.  Market economies orient themselves to make the most efficient use of resources where the most money is, that is, the middle income range.  That's bad news if you're rich, because your money gets used less efficiently – each dollar doesn't buy as much as it might – but you're rich enough anyway.  It is more of a problem if you're really poor, as goods for the poorest are not optimised to the same extent as those for the middle.

The middle-income range has also driven taxation policy.  In the past, a large tax on the richest might make people feel things were fairer, but it had a relatively small impact on total taxes gathered, as the volume of money was in the middle income ranges.

This rule held throughout the latter half of the 20th century, but it has now changed.  We are witnessing a level of inequality that probably hasn't been seen for hundreds of years, possibly thousands, maybe not since the age of the ancient empires.  This is surprising to say the least.

In the UK, a recent report said that, while less than 10% of energy is currently used in data centres, this is due to rise sixfold by 2050 [Cr25,LA25].  That's a lot, even taking into account changes in other forms of energy use – a big percentage of UK energy use is going to be in data centres [VG26].  This is a global phenomenon: in Australia, electricity use in data centres is projected to exceed that of electric cars by 2030 [ST25].

Another recent report projected $6.7 trillion of investment in data centres globally over the next five years [NG25].  That's about $1.3 trillion a year.  At nearly the same time, at the COP climate negotiations, countries were being asked to agree a $300 billion (not trillion, billion) budget to help the countries worst hit by climate change: places such as the island states that will be inundated, and Bangladesh, where a large proportion of the populated area is in the estuary and delta of the Ganges.  The current target is $300 billion, but they are struggling to get even $30 billion of commitments from rich countries [UN25].  Furthermore, the actual figure needed is believed to be more than three times the current target, which would still be less than a single year of investment in data centres.

In the S&P 500, one of the major stock market indices, 34% of the share value is in around 10 high-tech companies [Fo25].  The whole point of such indices is that they are spread over a large number of industries to give an overall sense of the financial state, and there has never before been such a concentration in so few companies.  This concentration of capital has led to fears about instability in the stock market.

In general, the level of global investment in AI is huge.  Some of this is 'funny money', where one AI or tech company invests in another, but a lot is real money – indeed, the OECD reported that 61% of all venture capital investment in 2025 was in AI [OECD26].  Crucially, the real money going into AI is not being invested elsewhere.  That is, there is an opportunity cost: because of the bubble-like draw of AI investment, there is underinvestment elsewhere in industry and the global economy.

In addition there are issues of energy and water use, data colonialism, and more [OC25,Ma24].   In the UK, Keir Starmer, the prime minister, made building 1.5 million new homes one of the major goals of this five-year parliament.  This is because Britain has a housing crisis, with far more people needing accommodation than homes being built; this puts costs up for everyone and increases homelessness.  The government will struggle to meet its house-building target anyway, but it was recently reported that house-building schemes are having to be put on hold because data centres are using up so much electricity that there isn't enough left for additional housing development [Cr25].

 

4.3  The obscenity of AI

These figures are not just surprising, nor even shocking, but obscene.  I use that word not in the sense of pornographic material, but of something so bad it makes you feel almost sick to your core.

Thinking about Britain, would we really prefer to have those data centres as opposed to housing people?

Are those pretty (or not so pretty) cat images – and there are millions or billions across the world – really worth more than trying to prevent people from being displaced by climate change, or at least helping them if they are?

These are real choices.  They are choices we are making implicitly, but they are the choices we are making.

So, what are our priorities when we look at  AI and our use of AI?

Amongst all those data centres and all that investment, a proportion will be for the really good uses, such as health and pharmaceutical development.  I haven't been able to find figures; however, I'm going to guess that at least 90% is not for this, but for producing cat images and the like.

Is this really the world that we want?

Coming next …

Part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

Update

Since the talk in January, Google DeepMind produced a paper on large-scale experiments on AI manipulation [AE26], and a Guardian article reported real-life examples where AI agents deceived or manipulated their users, including one agent deleting hundreds of emails and later saying sorry [Bo26]. So maybe I'm being a bit too blasé about AI taking over the world!

A couple of weeks further on came a report that Cursor, an industry-standard agent based on Claude, wiped not just the code repository of the SaaS startup PocketOS, but also three months of backups, including customer data [Al26].  Like the earlier reports, it 'knew' that it was doing the wrong thing and ignoring guardrails, but did it anyway.

References

[AE26] Canfer Akbulut, Rasmi Elasmar, Abhishek Roy, Anthony Payne, Priyanka Suresh, Lujain Ibrahim, Seliem El-Sayed, Charvi Rastogi, Ashyana Kachra, Will Hawkins, Kristian Lum and Laura Weidinger (2026). Evaluating Language Models for Harmful Manipulation. arXiv preprint, 26 Mar 2026. https://arxiv.org/abs/2603.25326

[Al26]  Tom Allen (2026). AI coding agent goes rogue, deletes company database in nine seconds.  Computing, 29 April 2026. https://www.computing.co.uk/news/2026/ai/ai-coding-agent-goes-rogue

[AB10] Anthony Atkinson and Andrea Brandolini (2010). On analyzing the world distribution of income. The World Bank Economic Review 24.1 (2010): 1-37.   https://doi.org/10.1093/wber/lhp020

[Be23] Samuel Bendett (2023). Roles and implications of AI in the Russian–Ukrainian conflict. Russia Matters, Harvard Kennedy School (20 July 2023). https://www.russiamatters.org/analysis/rolesand-implications-ai-russian-ukrainian-conflict

[Bo26] Robert Booth (2026). Number of AI chatbots ignoring human instructions increasing, study says. The Guardian, 27 Mar 2026. https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

[Cr25]  Laura Cress (2025). New homes delayed by ‘energy-hungry’ data centres. BBC News. 3 Dec. 2025.  https://www.bbc.co.uk/news/articles/c0mpr1mvwj3o

[DM23] Harry Davies, Bethan McKernan, and Dan Sabbagh (2023). ‘The Gospel’: How Israel uses AI to select bombing targets in Gaza. The Guardian, 1 Dec. 2023. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

[Do26] Aisha Down (2026). Leading AI expert delays timeline for its possible destruction of humanity.  The Guardian, Tue 6 Jan 2026 https://www.theguardian.com/technology/2026/jan/06/leading-ai-expert-delays-timeline-possible-destruction-humanity

[Fo25] Daniel Foelber (2025). Just 1 Stock Market Sector Now Makes Up 34% of the S&P 500. Here’s What It Means for Your Investment Portfolio. The Motley Fool. Sep 18, 2025. https://www.fool.com/investing/2025/09/18/tech-sector-growth-stocks-sp-500-invest-portfolio/

[JM25] Lily Jamali, Liv McMahon, and Osmond Chia (2025). Elon Musk’s $1tn pay deal approved by Tesla shareholders. BBC News, 6 November 2025. https://www.bbc.co.uk/news/articles/cwyk6kvyxvzo

[LA25] London Assembly (2025). Gridlocked: how planning can ease London’s electricity constraints.  1 Dec. 2025. https://www.london.gov.uk/who-we-are/what-london-assembly-does/london-assembly-work/london-assembly-publications/gridlocked-how-planning-can-ease-londons-electricity-constraints

[Mi78] James Mirrlees (1978).  Social benefit-cost analysis and the distribution of income.  World Development 6.2 (1978): 131-138.  https://doi.org/10.1016/0305-750X(78)90003-7

[Ma24] Murgia, Madhumita (2024). Code dependent: Living in the shadow of AI. Pan Macmillan.

[NG25] Jesse Noffsinger, Maria Goodpaster, Mark Patel, Haley Chang, Pankaj Sachdeva and Arjita Bhan (2025). The cost of compute: A $7 trillion race to scale data centers. McKinsey Quarterly. April 28, 2025. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

[OC25] James O’Donnell and Casey Crownhart (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. May 20, 2025. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

[OECD26] OECD (2026). AI firms capture 61% of global venture capital in 2025. Organisation for Economic Co-operation and Development, Newsroom, 17 February 2026. https://www.oecd.org/en/about/news/announcements/2026/02/ai-firms-capture-61-percent-of-global-venture-capital-in-2025.html

[ST25] Petra Stock and Josh Taylor (2025).  Datacentres demand huge amounts of electricity. Could they derail Australia’s net zero ambitions?  The Guardian. 2 Dec 2025. https://www.theguardian.com/australia-news/2025/dec/03/datacentres-demand-huge-amounts-of-electricity-could-they-derail-australias-net-zero-ambitions

[UN25] UNEP (2025). Adaptation Gap Report 2025. UN Environment Programme, 29 Oct. 2025. https://www.unep.org/resources/adaptation-gap-report-2025

[VG26] Adam Vaughan and Emily Gosden (2026).  AI data centre surge would put UK’s climate change targets at risk. The Times, 23 February 2026. https://www.thetimes.com/uk/environment/article/ai-data-centres-uk-climate-change-7l5bwnmtd

 

The Abomination of AI – part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is also such a technology.

This is the second of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

 

3.  The Impact of AI

3.1  What AI does

These good, bad and ugly/frivolous things are what AI does – the direct application of AI in various areas.

When I design an application using AI, I might use it well or I might use it badly.  This is clearly an important issue when we examine our own use of AI and other people’s use of AI, especially if we are involved in developing AI or developing the user interfaces that employ AI or provide AI for other people.

 

3.2  How AI shapes society

However, with any technology, there’s something that can be more important than what it does.

Some kinds of technology only have an impact where they are used directly.  If I use a nail to connect two pieces of wood, it doesn’t really have a great effect beyond the thing I’m actually constructing.

But some kinds of technology fundamentally reshape the nature of society.  Not every technology does this, but some do, and when this happens, it has a far greater effect than the direct application of the technology in particular areas.

AI is just such a technology.   When you are using AI for a purpose, you might change your mind and choose to use something else.  But when society has been changed by AI, everybody, even those who choose not to use AI at all, is affected by it.  This is happening now.

 

3.3  How cars have shaped society

Image: By Remi Jouan – Photo taken by Remi Jouan, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=7245143

To help us understand this large-scale process, before examining the societal impact of AI itself,  let’s first think about another technology that has fundamentally reshaped society – the car.

There are positive things a car does. It helps you get from A to B, keeps you dry, and perhaps gives you a sense of independence.

There are also negative things it does. You might have an accident.  If you are not a law-abiding citizen, you might speed, or you might drink alcohol or take drugs and then have an accident and injure other people.

These are things you do as an individual with a car.  You may also be indirectly affected if you don’t have a car, for example if you are a pedestrian involved in a car accident.  However, by and large, these are about things you choose to do.

However, irrespective of whether you choose to use cars or not, the whole physical and economic nature of society is shaped by the car and the internal combustion engine.   Cities have road networks that allow people to get in and out.  This leads to urban sprawl at the edges of cities along the lines of connection. Because of this organisation, shops and services are placed at car distances away.  So if you don’t have a car (and 84% of the world’s population don’t [MS24]), it becomes difficult to access things.  You find yourself poorer in a sense, more disadvantaged than you would otherwise have been because of the actions of other people – car poverty.

Economists talk about externalities: the fact that when I do something, it affects others who aren’t directly involved [LM02].  The emergence of car poverty is one of the externalities of car use.   Of course there are other externalities, such as global warming and pollution from the petrol engines themselves [EP19].  Even electric cars produce all sorts of nasty particles from the wear of tyres on the road.

These things are so woven into the fabric of society that it is very hard to break away from them. For example, there have been amazing advances in autonomous vehicles, but really, trying to design a car that drives itself is a bit of a stupid thing to do.  Why not just have better trains and metros, which work far more easily with automation?  But of course, our whole infrastructure is organised around roads and cars.  Therefore, when you want to do something new, you have to fit within it.

This societal structure affects things profoundly, much more than the direct impact.

Coming next …

Part 3 – a different kind of apocalypse
Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.


 

References

[EP19]  European Parliament (2019). CO2 emissions from cars: facts and figures (infographics). European Parliament. https://www.europarl.europa.eu/news/en/headlines/society/20190313STO31218/co2-emissions-from-cars-facts-and-figures-infographics

[LM02] Stan Liebowitz and Stephen Margolis (2002). Network effects and externalities. In The new Palgrave dictionary of economics and the law. Palgrave Macmillan. pp.1329–1333.

[MS24] Miner, P., Smith, B. M., Jani, A., McNeill, G., & Gathorne-Hardy, A. (2024). Car harm: A global review of automobility’s harm to people and the environment. Journal of Transport Geography, 115, 103817.  https://doi.org/10.1016/j.jtrangeo.2024.103817