The Abomination of AI – part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

This is the fifth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself that interacts with market forces in a way that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.1–5.2.  Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

5.3  Digital and AI breaks market economics

So digital technology breaks market economics.  Yet this is what our whole world is built on.  Even countries that are not fully market economies, such as China, often rely upon market economics extensively, both internally and globally.  Indeed, market economics has driven nearly all late 20th-century trade and much before that, including the industrial revolution.  Market economics has not been good for everything, and certainly not for everybody, but it has had elements of success.  And now it is broken.

Digital technology breaks market economics and AI makes it worse.

One of the ways that AI makes this worse is that the new AI, large language models and the like, is built on big data and big computation.  This means that it requires big business … really big business, businesses bigger than most countries, in order to get into the game.

But once you are in that game, you have large volumes of people using your systems, generating more data, and perhaps the power to leverage your position and encourage other people to give you data.  For example, this can include governments giving certain companies access, sometimes exclusive access, to public health data.  And of course the successful companies then have the money to invest in more data centres to process that data.

Again, there is a positive feedback loop here, exacerbated by the huge computational and data needs of AI.

And of course this has wider effects, including the environmental impact seen in the figures on energy and water use.

But also, because the companies have to be so big, you end up with a potential democratic deficit.  This was very evident in America in the image of the ‘tech bros’ surrounding Trump at his inauguration.  Although there have been fallings-out between some of them since, that power of big business was very evident.  And that is in the US; smaller countries really struggle because the businesses are bigger than they are.

 

5.4  With the best will in the world …

So digital technology plus AI, by its very nature, leads to runaway inequality.  You have to work hard to stop that happening.

This doesn’t mean you can’t.  As we discussed, in our body’s immune system, we have positive feedback loops that are important to fight infection.  These would lead to autoimmune diseases if unchecked, but they are modified by negative feedback loops that control them.  Similarly, the macro-economic feedback loops of digital technology and AI are not unstoppable, but the natural progression is just for them to keep on going.

Now this potentially runaway growth of AI happens even if everybody plays nice.  It is not about evil owners of AI companies who are trying to control the world.  With the best will in the world, this will happen.

But, of course, they don’t always have the best will in the world.

Some of the problem is baked into our commercial legal systems.  In the UK, if you are on the board of directors of a company, your legal responsibility is to your shareholders, which typically means profit maximisation.  So even if you would like to do something better for society or the world, you are legally bound to do the thing that maximises profits.

So the leaders of big AI are almost forced not to do the right thing, though it varies between individuals how far they lean into that.

 

5.5  … Or not

Facebook internal strategy document quoted by Cory Doctorow [Do25]

In 2025 Meta, the owner of Facebook, was in the midst of an anti-trust case in the US regarding its takeover of Instagram in the early 2010s [Da25].  The US Government eventually lost its case against Meta, due largely to the emergence of TikTok as a competitor in the meantime.  However, as part of the case various internal Facebook documents came into the public domain.  Cory Doctorow, the open software campaigner, quotes from one internal strategy document, which showed that Mark Zuckerberg and Facebook understood precisely the role of emergent digital monopolies:

“Social networks have two stable equilibria: either everyone uses them, or no-one uses them.” [Do25]

“… The binary nature of social networks implies that there should exist a tipping point, ie some critical mass of adoption, above which a network will organically grow, and below which it will shrink.”

Other emails show that this understanding led to very deliberate attempts to stifle Instagram’s growth [Da25].  That is, Facebook was very aware of network effects and the presence of tipping points, and prepared to use techniques to make sure it stayed on the side of that critical mass it wanted to be.
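The tipping-point behaviour the memo describes can be sketched as a toy simulation.  This is purely illustrative: the threshold, steepness and adjustment rate below are invented parameters, not figures from any Facebook document.

```python
import math

def step(x, threshold=0.3, steepness=12):
    """One period of adoption dynamics.

    The attractiveness of joining is an S-curve in the current adoption
    fraction x; each period the adopting fraction moves partway towards it.
    """
    target = 1 / (1 + math.exp(-steepness * (x - threshold)))
    return x + 0.5 * (target - x)   # move halfway towards the target share

def run(x0, periods=60):
    """Iterate the adoption dynamics from a starting fraction x0."""
    x = x0
    for _ in range(periods):
        x = step(x)
    return x

# Two otherwise identical networks, starting either side of the critical mass:
print(run(0.12))   # below the tipping point: shrinks to a tiny niche
print(run(0.25))   # above it: grows organically towards near-universal use
```

The two extremes are Zuckerberg’s two “stable equilibria”; the unstable fixed point in between is the critical mass, and small pushes either side of it (of the kind the emails describe) decide which equilibrium a network ends up in.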

These statements were made in a largely pre-AI context (at least as AI is understood today), with regard to the role of emergent monopolies in social media, but the effects are of course intensified by AI.  I’m sure Meta was not and is not alone in being aware of these effects and being prepared to use them.

Coming next …

Part 6 – should we worry?

Runaway growth of AI is not painless – opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.


 

References

[Da25] David Dayen (2025). The Government Has Already Won the Meta Case. The American Prospect, April 16, 2025. https://prospect.org/2025/04/16/2025-04-16-government-already-won-meta-case-tiktok-ftc-zuckerberg/

[Do25] Cory Doctorow (2025). Mark Zuckerberg personally lost the Facebook antitrust case. Pluralistic, Apr 18, 2025. https://pluralistic.net/2025/04/18/chatty-zucky/

 

The Abomination of AI – part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

This is the fourth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself that interacts with market forces in a way that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

5  Why is this happening?

Why is this happening?  Well, we know the world is unequal, and we know that the way free markets work means that big companies often gain economies of scale and get larger.  Is it just natural that the same is happening with AI?

The answer is ‘no’; this is clear from the way AI stocks have performed, unlike any previous (legitimate) business.  There are elements of the normal operation of markets, but there are particular properties of digital technology in general, and AI in particular, that break aspects of market economics and lead to emergent monopolies.

These are due to positive feedback loops.  If you are from an engineering background you’ll know about these, but for those who aren’t we’ll take a little segue to look at positive feedback loops in general and then come back to how that applies in the economic sense.

 

5.1  Understanding feedback loops

Image: By Charles Schmitt – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44338386

Feedback loops are everywhere.  The term simply means a process where the output in some way influences the future input.

One type is called a negative feedback loop, where a change in the input creates an effect that counters the change.  This can be engineered.  The classic example is the centrifugal governor of a steam engine, which keeps the engine running at a set speed.  It consists of a set of steel balls on arms that spin as the engine spins.  As the arms spin, the steel balls rise due to centrifugal force, which opens a valve, reducing the pressure of the steam and hence the speed of the engine.  If the engine turns too slowly the balls fall, shutting the valve, increasing the steam pressure and hence the speed of the engine.  Notice that this negative feedback effect leads to stability and balance.

In geometric shapes, you often see smoothness when there are negative feedback effects.  A water drop is a smooth sphere because any small disturbance on the surface tends to get counteracted by the surface tension. So any little dents fill back in again very rapidly.  Again the negative feedback loop creates a stable balance.

Positive feedback effects are when the output that is produced reinforces the original change.  Think of a microphone placed near a speaker and the screech you get – that is a classic positive feedback effect: instability and extremes.  In physical structures, positive feedback effects often lead to sharp edges, like a snowflake: as the snowflake forms, any sharp point attracts more ice formation and therefore grows.

Positive feedback often leads to tipping points, where you get sudden changes, and hysteresis, where changes are hard to reverse.  Many climate change issues are of this kind.

This sounds as though positive feedback is a bad idea, but positive feedback can be really powerful.  Snowflakes are beautiful and they happen because of this!  In our bodies our immune system has some positive feedback cycles so that our bodies can react very rapidly.  Positive feedback often leads to exponential growth, and here the immune system can ramp up very quickly to fight infections.  However, useful positive feedback is usually wrapped around with controls that create a negative feedback, which stops them getting too extreme.

So it is not that positive feedback is bad per se and negative feedback good.  However, it often feels as though they should be labelled the other way round, as positive feedback on its own tends to have these runaway effects – and nobody wants a screeching microphone!
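The contrast between the two kinds of loop can be sketched numerically.  The setpoint, gains and step counts below are arbitrary illustrative values, not a model of any real engine or immune system.

```python
def negative_feedback(x, setpoint=100.0, gain=0.5, steps=20):
    """Each step pushes x back towards the setpoint, like the engine governor."""
    for _ in range(steps):
        x -= gain * (x - setpoint)   # the correction opposes the deviation
    return x

def positive_feedback(x, gain=0.5, steps=20):
    """Each step reinforces the deviation, like a microphone near a speaker."""
    for _ in range(steps):
        x += gain * x                # the reinforcement amplifies the signal
    return x

print(negative_feedback(140.0))   # settles back to (almost exactly) 100
print(positive_feedback(1.0))     # grows exponentially: 1.5**20, over 3000
```

The negative loop halves the deviation every step and converges; the positive loop multiplies the signal every step and runs away – exactly the exponential ramp-up the immune system exploits, and the reason it needs controlling loops around it.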

 

5.2  Network effects / externalities

Image: https://en.m.wikipedia.org/wiki/File:Microsoft_Office_Word_%282019%E2%80%93present%29.svg

Human society has many networks, some mediated by technology, some by our normal human relationships, such as networks of people that know one another, or business contacts with each other.  Some of these are within a single group, some are more structured, such as the way teachers are connected with the children they teach, who in turn have parents, who may themselves know each other or talk to teachers at parents’ evenings.

Crucially though, these human social networks change the value of digital goods.  To be precise they can change the value of other kinds of goods as well, but particularly digital ones.

If your colleagues all use Microsoft Word, then it makes more sense for you to use Microsoft Word rather than, say, Apple Pages.  I use PowerPoint for presentations largely because I often want to share slides with other people, even though I work on a Mac and Keynote might be better for some effects.

These are positive feedback cycles.  If I use something, it makes it more valuable for you to use the same thing.  If you use it, it makes it more valuable for me.  Like all positive feedback, this leads to runaway effects.
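The value effect can be sketched with the classic “everyone can share with everyone” argument: if the value each user gets is proportional to the number of other users they can exchange documents with, total value grows roughly with the square of the user base.  The per-link value below is an arbitrary unit, used only to show the shape of the growth.

```python
def value_per_user(n_users, value_per_link=1.0):
    """Value one user gets: they can exchange documents with the n-1 others."""
    return value_per_link * (n_users - 1)

def total_value(n_users, value_per_link=1.0):
    """Total value across the user base, counting each pair of users once."""
    return value_per_link * n_users * (n_users - 1) / 2

# Doubling the user base roughly quadruples the total value, and raises the
# value for every individual user too -- so the bigger product keeps getting
# relatively more attractive:
print(value_per_user(1000))                    # 999.0
print(value_per_user(2000))                    # 1999.0
print(total_value(2000) / total_value(1000))   # ≈ 4
```

This quadratic shape is why adoption feeds on itself: every new user makes the product better for all the existing ones, which attracts the next user.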

Image: By Calistemon – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=127909261

Now for a little bit of economics.  Market economics assumes that markets are open, that is, it is possible for new businesses to start in an area and compete with existing ones, often leading to more efficient production.  The arguments for why market economics works (to the extent it does, and there are limits to that) are predicated on this openness.  So, when monopolies happen, there are problems.

You can get natural monopolies where there is a single resource that is rare and that only one or a small number of players control, just as many of the rare earths are found in China.  This is a natural phenomenon and can cause problems, hence worries about finding alternative sources or alternative materials.

Sometimes monopolies are engineered, when a group of companies in a sector come together and agree to keep prices high or restrict output.  Most countries have antitrust or anti-monopoly laws, which try to ban this behaviour so that new players can enter a market and it does not become controlled.

The trouble with network effects is that the positive feedback leads to a winner-takes-all situation.  The issue first hit the headlines back in 2001 with Microsoft’s bundling of Internet Explorer [LM01], but it applies to much other software.  It is very hard to have even two successful software products in an area, say Keynote and PowerPoint, let alone lots of different presentation packages, because if one person uses a product it changes its value for everybody else.  This is an emergent monopoly.

Note, this is not because the manufacturers get together and do something underhand.  It is just a natural impact of digital technology, which you have to work hard to avoid.  There are ways of doing this: you can ensure open standards, for example; the fact that PPTX is an open format means it is possible for other products to use it and interoperate with PowerPoint.

So there are ways you can counter the worst effects, but the natural impact is often for digital goods to give rise to these emergent monopolies.

Coming next …

Part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.


 

References

[LM01] Liebowitz, S., and Margolis, S. (2001). Network effects and the Microsoft case. Chapter 6 in Dynamic competition and public policy: Technology, innovation, and antitrust issues, J. Ellig (ed.), pp.160–192. https://personal.utdallas.edu/~liebowit/netwext/ellig%20paper/ellig.htm

 

The Abomination of AI – part 3 – a different kind of apocalypse

Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

This is the third of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself that interacts with market forces in a way that is problematic and different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

4.  A different kind of apocalypse

Image: [Do26]

The term ‘abomination’ conjures up apocalyptic images – something so malign and powerful that it either destroys the world itself, or through its influence drives others to mutual despoliation or annihilation.

Now I’m talking about this regarding the nature of AI, how AI changes society, so it is indeed a bit apocalyptic!

There are different kinds of apocalyptic views regarding AI.  A global war machine let loose on humanity, as envisaged in Terminator, was distant science fiction when the films were first released, but sounds prescient as the war in Ukraine is fought by drones hunting humans and, in Gaza and succeeding conflicts, Israel’s military decisions have increasingly been taken by AI [Be23,DM23].  While a Terminator-style takeover still feels pretty distant, an accidental conflagration feels much less so.

Many fears centre around the singularity – the point at which AI becomes capable of designing itself, leading to runaway developments over which we have no control.  Related to this is the point at which AI becomes self-aware and maybe decides that humans are rivals to be squashed, or perhaps simply pushed aside as irrelevant.  The ex-OpenAI expert Daniel Kokotajlo recently announced that true AGI (artificial general intelligence) was not as imminent as first envisaged, and gave the world a reprieve until 2034 [Do26] – well, we can all heave a sigh of relief.

While this form of disaster scenario should not be ignored entirely, there are more immediate worries.  Without being either sentient or omnipotent, AI is transforming the world.

 

4.1  The end comes quietly

Disaster scenarios make good Hollywood movies, but often the end comes quietly.  In the past some empires and civilisations collapsed entirely, but more often there was a slow decay, a series of more minor crises and a gradual withering from within.

It is this more insidious impact of AI that concerns me.

 

4.2  Facts and figures

Let’s consider some facts and figures about AI.  Some involve estimates with varying levels of confidence, but together they paint a picture.

First is the announcement that Tesla had negotiated a $1 trillion pay deal with Elon Musk [JM25].  It is a 10-year deal, and a lot of it is in stocks and shares, so you could argue whether it is real money or not, but it is still substantial.  Or rather, not just substantial, but enormous.  This is a trillion dollars: not a million, nor even a billion, but a million million.  A trillion dollars is about $3,000 for each man, woman, and child in the US or, over 10 years, about $300 per year.
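As a quick sanity check, the per-head arithmetic can be spelled out, assuming a US population of roughly 333 million (an approximation introduced here, not a figure from the talk):

```python
# Rough per-head arithmetic for the $1 trillion pay deal.
pay_deal = 1_000_000_000_000   # one trillion dollars: a million million
us_population = 333_000_000    # approximate US population (assumed)
years = 10

per_person = pay_deal / us_population
print(round(per_person))          # ≈ 3000 dollars per person
print(round(per_person / years))  # ≈ 300 dollars per person per year
```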

I first studied economics in the late 1970s.  All societies are unequal, and there is a well-known rule that the high-end tail of incomes in western countries follows an approximate 1/x^K rule (with K ≈ 2), where the number of earners at a particular income is inversely proportional to the square of the income, or smaller [Mi78,AB10].  This means that there are a few people with vast amounts and lots of people with much less, but the people with huge amounts are few enough that they did not make a huge difference to the overall picture.  If the income of the rich had been spread over all of society it would have made almost no difference.  Overall, the volume of money was in the middle income range.
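The shape of this tail rule can be made concrete with a Pareto distribution, reading the rule as the number of earners *above* income x falling off like 1/x², i.e. a Pareto tail with shape parameter alpha = 2 (an illustrative assumption; the blog’s informal statement admits more than one reading).  For such a tail, the income share of the richest fraction p works out to p^((alpha−1)/alpha):

```python
def top_income_share(p, alpha=2.0):
    """Income share of the richest fraction p under a Pareto(alpha) tail."""
    return p ** ((alpha - 1) / alpha)

# With alpha = 2 the top 1% hold sqrt(0.01) = 10% of all income, and the
# top 0.1% only about 3% -- consistent with the point that the very rich
# were too few to dominate the overall picture:
print(top_income_share(0.01))    # ≈ 0.1
print(top_income_share(0.001))   # ≈ 0.032
```

When the tail gets heavier (alpha closer to 1), the same formula puts a rapidly growing share of income into a tiny elite, which is the change the following paragraphs describe.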

This has important implications.  Market economies orient themselves to make the most efficient use of resources where the most money is, that is the middle income range.  Now that’s bad news if you’re rich, because your money gets used less efficiently – each dollar doesn’t buy as much as it might, but you’re rich enough anyway.  This is more of a problem if you’re really poor, as goods for the poorest are not optimised to the same extent as for the middle.

The middle income area has also driven taxation policy.  In the past if you placed a large tax on the richest, it might make people feel it was fairer, but had a relatively small impact on total taxes gathered as the volume of money was still in the middle income ranges.

This rule held throughout the latter half of the 20th century, but has changed.  We are witnessing a level of inequality here that hasn’t been seen probably for hundreds of years, possibly thousands, maybe even since the age of the ancient empires. This is quite surprising to say the least.

In the UK, a recent report said that, while less than 10% of energy is currently used in data centres, this is due to rise sixfold by 2050 [Cr25,LA25].  That is a lot, even taking into account changes in other forms of energy use – a big percentage of UK energy use is going to be in data centres [VG26].  In Australia, electricity use in data centres is projected to exceed that of electric cars by 2030 [ST25].

Another recent report said that $6.7 trillion of investment in data centres is expected globally in the next five years [NG25].  That is about $1.3 trillion a year.  At nearly the same time, at the UN climate COP, countries were being asked to agree a $300 billion (not trillion, billion) budget to help the countries worst hit by climate change: places such as the island states that will be inundated, and Bangladesh, where a large proportion of the populated area lies in the estuary and delta of the Ganges.  The current target is $300 billion, but they are struggling to get even $30 billion of commitments from rich countries [UN25].  Furthermore, it is believed that the actual figure needed is more than three times the current target of $300 billion, which would still be less than a single year of investment in data centres.

In the S&P 500, one of the major stock market indices, 34% of the share value is in about ten high-tech companies [Fo25].  The whole point of these indices is that they are spread over large numbers of industries to give an overall sense of the financial state, and there has never before been such a concentration in a small number of companies.  This concentration of capital has led to fears about instability in the stock market.

In general, the level of global investment in AI is huge.  Some of this is ‘funny money’, where one AI or tech company invests in another, but a lot is real money – indeed, the OECD reported that 61% of all venture capital investment in 2025 was in AI [OECD26].  Crucially, the real money going into AI is not being invested elsewhere.  That is, there is an opportunity cost: because of the bubble-like draw of AI investment, there is underinvestment elsewhere in industry and the global economy.

In addition there are issues of energy and water use, data colonialism, and more [OC25,Ma24].  In the UK, Keir Starmer, the prime minister, made building 1.5 million new homes one of the major goals of this five-year parliament.  This is because Britain has a housing crisis, with far more people needing accommodation than homes being built; this puts costs up for everyone and increases homelessness.  The government will struggle to meet its house-building target anyway, but it was recently reported that housing schemes are having to be put on hold because data centres are using up so much electricity that there is not enough left for additional housing development [Cr25].

 

4.3  The obscenity of AI

These figures are not just surprising, nor even shocking, but obscene.  I use that word not in the sense of pornographic material, but of something so bad that it almost makes you feel sick to your core.

Thinking about Britain, would we really prefer to have those data centres as opposed to housing people?

Are those pretty (or not so pretty) cat images – and there are millions or billions of them across the world – really worth more than trying to prevent people from being displaced by climate change, or at least helping them if they are?

These are real choices.  They are choices we are making implicitly, but they are the choices we are making.

So what are our priorities when we look at  AI and our use of AI?

Amongst all those data centres and all that investment, some proportion will be for the really good uses, such as health and pharmaceutical development.  I haven’t been able to find figures, but I’m going to guess that at least 90% is not for this, and is producing cat images and the like.

Is this really the world that we want?

Coming next …

Part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

Update

Since the talk in January, Google DeepMind produced a paper on large-scale experiments on AI manipulation [AE26], and a Guardian article reported on real-life examples where AI agents deceived or manipulated their users, including one agent deleting hundreds of emails and later saying sorry [Bo26].  So maybe I’m being a bit too blasé about AI taking over the world!

References

[AE26] Canfer Akbulut, Rasmi Elasmar, Abhishek Roy, Anthony Payne, Priyanka Suresh, Lujain Ibrahim, Seliem El-Sayed, Charvi Rastogi, Ashyana Kachra, Will Hawkins, Kristian Lum and Laura Weidinger (2026). Evaluating Language Models for Harmful Manipulation. arXiv preprint, 26 Mar 2026.
https://arxiv.org/abs/2603.25326

[AB10] Anthony Atkinson and Andrea Brandolini (2010). On analyzing the world distribution of income. The World Bank Economic Review 24.1 (2010): 1-37.   https://doi.org/10.1093/wber/lhp020

[Be23] Samuel Bendett (2023). Roles and implications of AI in the Russian–Ukrainian conflict. Russia Matters, Harvard Kennedy School (20 July 2023). https://www.russiamatters.org/analysis/rolesand-implications-ai-russian-ukrainian-conflict

[Bo26] Robert Booth (2026). Number of AI chatbots ignoring human instructions increasing, study says. The Guardian, 27 Mar 2026. https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

[Cr25]  Laura Cress (2025). New homes delayed by ‘energy-hungry’ data centres. BBC News. 3 Dec. 2025.  https://www.bbc.co.uk/news/articles/c0mpr1mvwj3o

[DM23] Harry Davies, Bethan McKernan, and Dan Sabbagh (2023). ‘The Gospel’: How Israel uses AI to select bombing targets in Gaza. The Guardian, 1 Dec. 2023. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

[Do26] Aisha Down (2026). Leading AI expert delays timeline for its possible destruction of humanity.  The Guardian, Tue 6 Jan 2026 https://www.theguardian.com/technology/2026/jan/06/leading-ai-expert-delays-timeline-possible-destruction-humanity

[Fo25] Daniel Foelber (2025). Just 1 Stock Market Sector Now Makes Up 34% of the S&P 500. Here’s What It Means for Your Investment Portfolio. The Motley Fool. Sep 18, 2025. https://www.fool.com/investing/2025/09/18/tech-sector-growth-stocks-sp-500-invest-portfolio/

[JM25] Lily Jamali, Liv McMahon, and Osmond Chia (2025). Elon Musk’s $1tn pay deal approved by Tesla shareholders. BBC News, 6 November 2025. https://www.bbc.co.uk/news/articles/cwyk6kvyxvzo

[LA25] London Assembly (2025). Gridlocked: how planning can ease London’s electricity constraints.  1 Dec. 2025. https://www.london.gov.uk/who-we-are/what-london-assembly-does/london-assembly-work/london-assembly-publications/gridlocked-how-planning-can-ease-londons-electricity-constraints

[Mi78] James Mirrlees (1978).  Social benefit-cost analysis and the distribution of income.  World Development 6.2 (1978): 131-138.  https://doi.org/10.1016/0305-750X(78)90003-7

[Ma24] Murgia, Madhumita (2024). Code dependent: Living in the shadow of AI. Pan Macmillan.

[NG25] Jesse Noffsinger, Maria Goodpaster, Mark Patel, Haley Chang, Pankaj Sachdeva and Arjita Bhan (2025). The cost of compute: A $7 trillion race to scale data centers. McKinsey Quarterly. April 28, 2025. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

[OC25] James O’Donnell and Casey Crownhart (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. May 20, 2025. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

[OECD26] OECD (2026). AI firms capture 61% of global venture capital in 2025. Organisation for Economic Co-operation and Development, Newsroom, 17 February 2026. https://www.oecd.org/en/about/news/announcements/2026/02/ai-firms-capture-61-percent-of-global-venture-capital-in-2025.html

[ST25] Petra Stock and Josh Taylor (2025).  Datacentres demand huge amounts of electricity. Could they derail Australia’s net zero ambitions?  The Guardian. 2 Dec 2025. https://www.theguardian.com/australia-news/2025/dec/03/datacentres-demand-huge-amounts-of-electricity-could-they-derail-australias-net-zero-ambitions

[UN25] UNEP (2025). Adaptation Gap Report 2025. UN Environment Programme. 29 Oct. 2025. https://www.unep.org/resources/adaptation-gap-report-2025

[VG26] Adam Vaughan and Emily Gosden (2026).  AI data centre surge would put UK’s climate change targets at risk. The Times, 23 February 2026. https://www.thetimes.com/uk/environment/article/ai-data-centres-uk-climate-change-7l5bwnmtd

 

The Abomination of AI – part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is also such a technology.

This is the second of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

 

3.  The Impact of AI

3.1  What AI does

So the good, the bad and the ugly/frivolous are all things that AI does: the direct application of AI in various areas.

When I design an application using AI, I might use it well or I might use it badly.  This is clearly an important issue when we examine our own and other people’s use of AI, especially if we are involved in developing AI, building the user interfaces that employ it, or providing AI for other people.

 

3.2  How AI shapes society

However, with any technology, there’s something that can be more important than what it does.

Some kinds of technology only have an impact where they are used directly.  If I use a nail to connect two pieces of wood, it doesn’t really have a great effect beyond the thing I’m actually constructing.

But some kinds of technology fundamentally reshape the nature of society.  Not every technology does this, but some do, and when this happens, it has a far bigger effect than the direct application of the technology in particular areas.

AI is just such a technology.   When you are using AI for a purpose, you might change your mind and choose to use something else.  But when society has been changed by AI, everybody, even those who choose not to use AI at all, is affected by it.  This is happening now.

 

3.3  How cars have shaped society

Image: By Remi Jouan – Photo taken by Remi Jouan, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=7245143

To help understand this large-scale process, before looking at the societal impact of AI itself,  let’s first look at another technology that has fundamentally reshaped society – the car.

There are positive things cars do. When you get into a car, it does things for you. It helps you get from A to B, keeps you dry, perhaps gives you a sense of independence.

There are also negative things it does. You might have an accident.  If you are not a law-abiding citizen, you might speed, or drink alcohol or take drugs, and then have accidents and injure other people.

These are things we do as individuals with a car.  You may also be indirectly affected if you don’t have a car: as a pedestrian, you might still be involved in a car accident.  However, by and large these are the consequences of things you choose to do.

However, irrespective of whether you choose to use cars or not, the whole physical and economic nature of society is shaped by the car and by the internal combustion engine.   Cities have road networks that allow people to get in and out.  This leads to urban sprawl at the edge of cities along the lines of connection. Because of this organisation, shops and services are placed at car distances away.  So if you don’t have a car (and 84% of the world’s population don’t [MS24]), it becomes difficult to access things.  You find yourself poorer in a sense, more disadvantaged than you would otherwise have been, because of the actions of other people – car poverty.

Economists talk about externalities, the fact that when I do something, it affects others who aren’t directly doing it [LM02].  The emergence of car poverty is one of the externalities of car use.   Of course there are other externalities like global warming from the petrol engines themselves and pollution [EP19].  Even electric cars produce all sorts of nasty particles from the wear of tyres on the road.

These things are so woven into the fabric of society that it is very hard to break away from them. For example, there have been amazing advances in autonomous vehicles, but really, trying to design a car that drives itself is a bit of a stupid thing to do.  Why not just have better trains and metros, which work far more easily with automation?  But of course, our whole infrastructure is organised around roads and cars.  Therefore, when you want to do something new, you have to fit within it.

This societal structure changes things dramatically, much more than the direct impact.

Coming next …

Part 3 – a different kind of apocalypse
Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.


 

References

[EP19]  European Parliament (2019). CO2 emissions from cars: facts and figures (infographics). European Parliament. https://www.europarl.europa.eu/news/en/headlines/society/20190313STO31218/co2-emissions-from-cars-facts-and-figures-infographics

[LM02] Stan Liebowitz and Stephen Margolis (2002). Network effects and externalities. In The new Palgrave dictionary of economics and the law. Palgrave Macmillan. pp.1329–1333.

[MS24] Miner, P., Smith, B. M., Jani, A., McNeill, G., & Gathorne-Hardy, A. (2024). Car harm: A global review of automobility’s harm to people and the environment. Journal of Transport Geography, 115, 103817.  https://doi.org/10.1016/j.jtrangeo.2024.103817

 

The Abomination of AI – part 1 – setting the scene

AI can be used for good or bad purposes as well as frivolous time wasting!  However, there are also larger-scale impacts of AI as it interacts badly with the processes of the global free market, simultaneously amplifying the least satisfactory aspects of the free market and undermining the fundamental assumptions of market economics.  The resulting runaway effects pose an existential risk to democracy and human dignity.

This is the first of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references. Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

AI can be used for tremendous good, not least in medicine, as well as frivolous and dangerous uses, such as exploitative online pornography.  However, it also has large-scale structural impacts on the very nature of our world.  The levels of financial investment in AI development and the financial and environmental costs of data centres can seem obscene, especially as climate change and political instability are threatening to tear down the apparent stability of the late 20th century.  AI has intensified some of the feedback effects of digital technology, creating unprecedented emergent monopolies that leave nations as well as individuals feeling all but powerless.  These are huge issues, and ones that countries, including Malaysia, are struggling to cope with.  However, there are also positive actions we can take as researchers and designers to ameliorate some of the problems and, in the process, create better and more resilient products that really serve people.

1.  Introduction

The word ‘abomination’ is not widely used, and sounds apocalyptic, often with religious connotations.  Here I’m using it in its broader sense of something that is awful to the point of being at the edge of evil.

And that sounds a very strong thing to say about AI itself.  In fact I’m talking more about the AI industry, but not simply the fact that it is an industry governed by profits and power; that is true of many industries, such as oil or plastics.  AI is special.  There is something about the nature of AI itself, which interacts with the nature of market forces in the world, that is problematic and different from other technologies.

I’ve touched upon this issue before in other talks and writing, but this is the first time I’ve focused on it centrally.

1.1  Projects and People

The ideas here are closely related to two projects, one past, one current.  First is Not-Equal (https://not-equal.tech), an EPSRC Network Grant funding a programme of work related to the digital economy and social justice [CC25]; I led the algorithmic social justice strand. Clara Crivellaro, who was the overall project lead, and I are in the process of writing a book on Algorithmic Social Justice in the CRC/T&F AI for Everything series.  The issues in this talk will form part of one of its chapters.

Second is an EU Horizon project, TANGO (https://tango-horizon.eu/), investigating human–machine decision making.  This very much looks at the ways in which AI can be used more positively in specific systems and decision-making situations, including public policy; however, it is less concerned with the macro-economic issues in this talk.

2.  Neutral Technology?

So there is a sort of a myth that technology is neutral.  As researchers, particularly in university, you do your work and come up with new ideas or technology, but how it’s used is up to other people.  It’s up to the politicians; it’s up to industry – not for us to worry about.  This idea of technology neutrality has been heavily critiqued over the years: saying, “we just gave them the guns, we didn’t pull the trigger”, just doesn’t sound convincing!

Of course there is some truth in the idea of neutrality.  Most technologies can be used in good ways or bad ways, but for some technologies, say nerve poisons, there are clearly aspects that drive them one way rather than another.

The title ‘abomination of AI’ sounds very negative, but at the scale of individual applications, AI is certainly not like nerve poison!  It can be used in good ways and bad ways, just like pretty much any technology.  So while this talk focuses on certain intrinsic dangers of AI, I certainly don’t think everything about AI is bad, otherwise I wouldn’t be writing textbooks about it.

The dangers I’ll be highlighting are at a macroeconomic scale, and are pretty negative, so after discussing these we’ll return to some of the constructive things that you can do within your discipline or work to help ameliorate some of the bad things.

Before that, let’s look at the smaller scale of individual applications of AI, good, bad and …

 

2.1  The Good – health and UX

Images: [NF24],  CSBIOPASSION, CC BY-SA 4.0
<https://creativecommons.org/licenses/by-sa/4.0>, via Wikimedia Commons.  https://commons.wikimedia.org/wiki/File:C12orf29_AlphaFold.png

There are clearly some wonderful things being achieved with AI, not least some of the amazing advances in medicine and health that have been happening because of AI.  You may recall that the 2024 Nobel Prize for Chemistry was shared between a chemist and two AI researchers [NF24], the latter for their role in developing AlphaFold, which has revolutionised protein structure prediction [JE21].

Closer to home, in my book AI for HCI [Dx26b] I look at the ways AI can help in user interface design and in creating better computer systems for people.

 

 

2.2  The Bad

Bias and discrimination

Paper: [Dx92]

Back in 1992, I first wrote about the dangers of ethnic, gender and social bias, in particular in black-box machine learning algorithms [Dx92].  To be honest, at that point I thought it was going to become a real issue in the next few years.  However, that was just before the big AI winter, so in fact it got put off for 25 years or so.

Paper: [Dx92] Images: [Da21,Gl21,Ma21,Bu21]

But now, of course, bias is a really critical issue, often in the press, including problems with facial recognition systems [Da21,Gl21,Ma21,Bu21].  In the US court system there is extensive controversy about the use of systems that recommend whether or not people should be granted parole [AL16,LM16].

 

Online exploitative pornography

Images: [CH26,MC26]

Another issue that has been hot in the press is the use of online platforms to produce exploitative pornography using AI.  While the UK was still wringing its hands deciding what to do, Malaysia and Indonesia led the world in banning Grok [CH26,MC26].  Even for a country, standing up to industries as big as X and to Elon Musk is no small thing.  In fact Musk did partially backtrack on Grok, and while the change is still limited, it does show that the global steamroller of AI is not inevitable.

 

2.3  The Ugly … or simply frivolous

Image: [Wa24]

So there are some really good uses of AI and some bad ones, but for the general public the majority of uses, while not always ugly, are at best frivolous.  The world is filled with images of cats on skateboards and cats dancing, albeit not all as ugly as the Chubby TikTok craze [Wa24]!  You have almost certainly seen some AI-generated cat images or videos, and they are often quite sweet, like cartoons emphasising the things we find appealing – large-eyed cuddly pets doing cute things.

This is not bad, it’s just frivolous.  And frivolous can be good: indeed fun is important for a full life and has been studied in HCI [BM18], including my own work on Christmas crackers [Dx18]. We pay to go to the circus, watch a comedy film or buy a toy for a child.  But maybe there is a point when the sheer volume and cost of frivolity becomes excessive?

Coming next …

Part 2 – the impact of AI

The obvious impact of AI is in the things it does directly.  Some technologies also change the very nature of society, affecting even those who do not use them.  Cars are an obvious example.  AI is such a technology.

 

References

[AL16] Angwin, J., Larson, J., Mattu, S. and Kirchner, L. (2016). Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica (23 May 2016). https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[BM18] Mark Blythe and Andrew Monk (2018). Funology 2: Critique, ideation and directions. In Funology 2: From Usability to Enjoyment. Cham: Springer.

[Bu21] Sarah Butler (2021). Uber facing new UK driver claims of racial discrimination. The Guardian, 6 Oct 2021. https://www.theguardian.com/technology/2021/oct/06/uber-facing-new-uk-driver-claims-of-racial-discrimination

[CH26] Osmond Chia and Silvano Hajid (2026). Malaysia and Indonesia block Musk’s Grok over explicit deepfakes. BBC News. 12 January 2026. https://www.bbc.co.uk/news/articles/cg7y10xm4x2o

[CC25] Clara Crivellaro, Lizzie Coles-Kemp, Alan Dix, and Ann Light (2025). Co-creating conditions for social justice in digital societies: modes of resistance in HCI collaborative endeavors and evolving socio-technical landscapes. ACM Transactions on Computer-Human Interaction. Vol. 32(2), Article No:15, pp.1–40  https://doi.org/10.1145/3711840

[Da21] Nicola Davis (2021).  From oximeters to AI, where bias in medical devices may lurk. The Guardian, 21 Nov 2021. https://www.theguardian.com/society/2021/nov/21/from-oximeters-to-ai-where-bias-in-medical-devices-may-lurk

[Dx92] A. Dix (1992).  Human issues in the use of pattern recognition techniques. In Neural Networks and Pattern Recognition in Human Computer Interaction Eds. R. Beale and J. Finlay. Ellis Horwood. 429-451.  https://alandix.com/academic/papers/neuro92/

[Dx18] A. Dix (2018). Deconstructing Experience: Pulling Crackers Apart. In: Blythe, M., Monk, A. (eds) Funology 2. Human–Computer Interaction Series. Springer, Cham. https://doi.org/10.1007/978-3-319-68213-6_29

[Dx26b] A. Dix. (2026). AI for Human–Computer Interaction. CRC Press. (in press). https://alandix.com/ai4hci/

[Gl21] Jessica Glenza (2021). Minneapolis poised to ban facial recognition for police use. The Guardian, 12 Feb 2021. https://www.theguardian.com/us-news/2021/feb/12/minneapolis-police-facial-recognition-software

[JE21]  Jumper, J., Evans, R., et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021). https://doi.org/10.1038/s41586-021-03819-2

[LM16] Larson, J., Mattu, S., Kirchner, L. and Angwin, J. (2016). How we analyzed the COMPAS recidivism algorithm. ProPublica, 23 May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm

[Ma21] Jyoti Madhusoodanan (2021). These apps say they can detect cancer. But are they only for white people?  The Guardian,  28 Aug 2021. https://www.theguardian.com/us-news/2021/aug/28/ai-apps-skin-cancer-algorithms-darker

[MC26] Liv McMahon and Laura Cress (2026). X could face UK ban over deepfakes, minister says. BBC News 9 January 2026. https://www.bbc.co.uk/news/articles/c99kn52nx9do

[NF24]  The Nobel Foundation (2024). The Nobel Prize in Chemistry 2024. NobelPrize.org, Nobel Prize Outreach. Accessed 17 May 2025.  https://www.nobelprize.org/prizes/chemistry/2024/summary/

[Wa24] Aidan Walker (2024). The unstoppable rise of Chubby: Why TikTok’s AI-generated cat could be the future of the internet. BBC, 20th August 2024.  https://www.bbc.co.uk/future/article/20240819-why-these-ai-cat-videos-may-be-the-internets-future