The Abomination of AI – part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.

This is the fifth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

§5.1.  §5.2.   Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

5.3  Digital and AI breaks market economics

So digital technology breaks market economics.  Yet this is what our whole world is built on.  Even countries that are not fully market economies, such as China, often rely upon market economics extensively, both internally and globally.  Indeed, market economics has driven nearly all late 20th-century trade and much before that, including the industrial revolution.  Now market economics has not been good for everything, and certainly not for everybody, but it has had elements of success.  And now it is broken.

Digital technology breaks market economics and AI makes it worse.

One of the ways that AI makes this worse is that the new AI, large language models and the like, are built on big data and big computation.  This means that they require big business … really big business, a business that’s bigger than most countries, in order to get in the game.

But once you are in that game, you have large volumes of people using your systems, generating more data, and perhaps the power to leverage and encourage other people to give you data.  For example, this can include governments giving certain companies access, sometimes exclusive access, to public health data.  And of course this means the successful companies have the money to invest in more data centres to process that data.

Again, there is a positive feedback loop here, exacerbated by the huge computational and data needs of AI.

And of course this has environmental impact, as seen in the data on energy and water use.

But also, because the companies have to be so big, you end up with possibly a democratic deficit.  This was very evident in America with the ‘tech bros’ surrounding Trump at his inauguration.  Although there have been some fallings-out between some of them since, the power of big business is very evident.  And that is in the US; smaller countries really struggle because the businesses are bigger than they are.

 

5.4  With the best will in the world …

So digital plus AI, by its very nature, leads to runaway inequality.  You have to work hard to stop that happening.

This doesn’t mean you can’t.  As we discussed, in our body’s immune system, we have positive feedback loops that are important to fight infection.  These would lead to autoimmune diseases if unchecked, but they are modified by negative feedback loops that control them.  Similarly, the macro-economic feedback loops of digital technology and AI are not unstoppable, but the natural progression is just for them to keep on going.

Now this potentially runaway growth of AI happens even if everybody plays nice.  It is not about evil owners of AI companies who are trying to control the world.  With the best will in the world, this will happen.

But, of course, they don’t always have the best will in the world.

Some of the problem is baked into our commercial legal systems.  In the UK, if you are on the board of directors of a company, your legal responsibility is to your shareholders, which typically means profit maximisation.  So even if you would like to do something better for society or the world, you are legally bound to do the thing that maximises profits.

So, the leaders of big AI are almost forced not to do the right thing, though how much they lean into this varies between individuals.

 

5.5  … Or not

Facebook internal strategy document quoted by Cory Doctorow [Do25]

In 2025 Meta, the owner of Facebook, was in the midst of an anti-trust case in the US regarding their takeover of Instagram in the early 2010s [Da25].  The US Government eventually lost their case against Meta, due largely to the emergence of TikTok as a competitor in the meantime.  However, as part of the case various internal Facebook documents came into the public domain.  Cory Doctorow, the open software campaigner, quotes from one internal strategy document, which showed that Mark Zuckerberg and Facebook understood precisely the role of emergent digital monopolies:

“Social networks have two stable equilibria: either everyone uses them, or no-one uses them.” [Do25]

“… The binary nature of social networks implies that there should exist a tipping point, ie some critical mass of adoption, above which a network will organically grow, and below which it will shrink.”

Other emails show that this understanding did lead to very deliberate attempts to stifle Instagram’s growth [Da25].  That is, Facebook was very aware of network effects and the presence of tipping points, and was prepared to use techniques to ensure it ended up on the side of the critical mass it wanted to be.
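The tipping-point behaviour described in the memo is easy to reproduce in a toy simulation (purely illustrative; the threshold and growth rate are invented numbers): the value of joining grows with the fraction already on the network, so adoption above the critical mass snowballs towards everyone, and below it decays towards no-one.

```python
def evolve(share, tipping=0.2, rate=0.3, steps=200):
    """Toy adoption model: `share` is the fraction of people on the network.
    Above the tipping point growth reinforces itself; below it, decline does."""
    for _ in range(steps):
        share += rate * share * (share - tipping) * (1 - share)
        share = min(max(share, 0.0), 1.0)   # keep within [0, 1]
    return share

print(evolve(0.25))  # starts above critical mass -> grows towards 1 (everyone)
print(evolve(0.15))  # starts below -> shrinks towards 0 (no-one)
```

Two starting points a few percentage points apart end at opposite extremes – exactly the “two stable equilibria” of the strategy document.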

These statements were made in a largely pre-AI context (at least as AI is understood today), with regard to the role of emergent monopolies for social media, but these effects are of course intensified by AI.  I’m sure Meta was not and is not alone in being aware of these effects and being prepared to use them.

Coming next …

Part 6 – should we worry?

Runaway growth of AI is not painless – there are opportunity costs of investment and human costs of lost jobs.  Gains may be transitory – buy-now-pay-later tech risks tying users into spiralling costs.


 

References

[Da25] David Dayen (2025). The Government Has Already Won the Meta Case. The American Prospect, April 16, 2025. https://prospect.org/2025/04/16/2025-04-16-government-already-won-meta-case-tiktok-ftc-zuckerberg/

[Do25] Cory Doctorow (2025). Mark Zuckerberg personally lost the Facebook antitrust case. Pluralistic, Apr 18, 2025. https://pluralistic.net/2025/04/18/chatty-zucky/

 

Minor bugs in major applications

Why do big applications such as MS Word and Gmail get new errors in heavily used parts that used to work?

Two have been annoying me lately.

Gmail’s disappearing send button

One is relatively minor, but Gmail seems to have forgotten how to work out the screen size so that when you create a new email the ‘send’ button is nearly invisible at the bottom of the page:

I know it is the send button, but the first time this happened, it was somewhat disconcerting – was I absolutely sure?  In fact the full button is there and if the email underneath is not too long  and you scroll to the end, the button appears:

… although then the menu at the top of the Gmail window half disappears!

There are similar left-to-right problems.  During one of its updates Gmail seems to have lost track of the exact window size, by about 20 pixels or so … but it used to be fine before.

And yes, I have reported this and the same problem happened with the send button at the bottom of the problem report form!

Word’s phantom changes

The second problem is with Microsoft Word and is far more difficult.  I commonly open an old document and select some text to copy into a new one I’m working on.  When I go to close the old document I get a file save dialogue:

I have changed nothing in the document … but then I have moments of doubt, especially if I’ve left it open for a while.

Perhaps I noticed a typo in the old document and forgot that I did it?  Perhaps I accidentally typed new text here that was intended for the new document?  I obviously don’t want to lose anything that was intentional, even if in the wrong place.  So would the safe thing be to save anyway?

But on the other hand, perhaps I accidentally typed something into the old document, maybe even deleted a whole section without realising?  I don’t want to lose anything important in the old document, nor even confusingly change its update time unnecessarily.

Here I’ve found no way to check whether this is a real change to the document or simply some sort of ghost change to things Word keeps track of that are not really part of the document text I see.

Poor coding, poor engineering or just AI?

In both cases the fault is repeatable, persistent and in some of the most commonly used parts of the systems.

The errors seem naive if accidental; and if in each case there was a deliberate change to the algorithms for screen size or the document-change flag, it would have taken only a single-use test by the developer to find and fix the problems.  Is this poor coding or the result of replacing developers with AI?

Once the error has happened, how does it get through regression testing?  I’d have thought that automated testing should pick up this sort of change.  Is there no periodic human sanity-check testing, or has this also been replaced with AI?

I’m sure my friend Nad, who is a master of architectural design and agile software process engineering, would have something to say about this!


Both are relatively minor inconveniences in the grand scheme of things, especially in a world where so many live in fear for their lives.  Yet the effects are still major.  These big products are used by billions.  Each minor friction and inconvenience adds up to a huge global cost in terms of added stress and lost productivity.

Of course, I am not going to stop using Gmail or Word because of this.  Perversely, because these are standard products used by so many, users are unlikely to change, so there is little incentive for the tech companies to avoid these huge costs to society at large … issues not unrelated to my current Abomination of AI blog series!

 

 

Not at CHI – points of view and reporting standards

For various reasons I won’t be at CHI1 in Barcelona, but I’d like to highlight two events I would have been part of had I been there.

One is more practice focused, the CHI 2026 UXR POV Workshop: Developing an AI-Powered UX Research Point of View (POV) (Thurs, 16th April, 14:15 – 15:45 CEST & 16:30 – 18:00 CEST).  This workshop builds on a strand of work driven by Renée Barsoum, Huseyin Dogan and Stephen Griff that seeks to create tools, in the form of playcards, to help understand the wide range of stakeholder points of view during user research.  I’ve made a short video for the workshop and I’ll distribute that after the event (no spoilers!).

The second is more research focused, a panel Does Peer Review Need to Change? A Panel on Reporting Standards and Checklists in the Age of AI (Mon, 13th April, 14:15 – 15:45 CEST).  I’ll write a little more about this here, as I won’t be there in person, but these are my personal views; the other panelists won’t necessarily agree!  If you are in Barcelona, go to the panel to see what they say.

Why reporting standards?

The reason for this panel is that CHI along with many conferences faces issues of workload and consistency of reviews.  The problems have been exacerbated by AI with both AI authored papers and AI reviews.

This is not just a problem for CHI.  Some years ago a computing conference2 needed to split its programme committee into two halves to deal with the volume of papers.  They were worried about consistency between the sub-committees, so had both sub-committees look at an overlapping sample of the papers.  They found that the two sub-committees agreed on a small number of very high quality papers and also a larger number of definite rejects.  However, between these extremes, for the large majority of the papers, agreement was no higher than chance.

The CHI panel will describe the way that some other disciplines have tried to tackle this.  This has been particularly important in medicine, where rigour in research is literally a life-or-death issue: there are standards for different kinds of work, for example the CONSORT standards for reporting randomised trials.  For other disciplines, including education and psychology, it can be hard to agree on definitions of quality, so they have often opted for standard ways to present results, making it easier for reviewers to focus on specific aspects and hence leading to more consistent reviews.  Could a similar approach work in CHI?

A launch pad not a shackle

One of the reasons I was invited onto the panel was a CHI paper from a few years ago, “HARK No More: On the Preregistration of CHI Experiments”, with Andy Cockburn and Carl Gutwin.  Although I was the card-carrying mathematician/statistician amongst the authors, I was also the one who kicked back slightly against strict demands for pre-registration.  Instead I advocated using it as a base point from which variations in data collection or analysis might be made, but where such variations needed to be clearly and strongly justified.

Similar caution is needed with standardised reporting more broadly.  Even with a range of different templates for different kinds of papers, there will always be work that doesn’t quite fit … I’m wondering what reporting standards for pictorials would look like!  So any process should allow variations and papers that completely step outside the accepted formats – otherwise the discipline will be frozen.  But, when the standards are not followed,  the discrepancies need to be justified and the bar set higher.

Democratising access

While the reasons for considering reporting standards emerge from issues such as workload and consistency of reviewing, the greatest benefits in my mind are far wider.  One of these is to help open up venues to those who are not part of in-groups.  During 40 years of publishing I have seen my own papers grow in length, with massively more references per paper, but I am not convinced that more recent work is more informative.

A year or two ago ACM surveyed members on acceptable uses of AI in academic publishing: should it be allowed at all, should it be allowable to include an AI in the author list?  After a point my answers became variants of a single theme:  “if we can’t tell the difference between AI bullshit and academic bullshit, AI is not the problem”.

CHI especially has a genre, a way of writing, which successful CHI authors learn and share through apprenticeship among their colleagues and students.  It is not that the substance doesn’t matter, but there are particular ways to say it as well. More formulaic paper structures would help authors focus on the content, rather than the form, making it easier for readers new to the community to draw out the critical information, and helping ensure that high quality work of authors new to the community is recognised.

Building the discipline

Academic venues are often rated based on their acceptance rates, with around 25% being the mark of a good venue. One of my comments in discussing the panel proposal (with which none of the other panelists agree!) was3:

a successful discipline has a 100% acceptance rate

Of course I don’t mean just accept everything, but rather that a 25% accept rate means 75% of work is effectively wasted. Now of course some of that will get published elsewhere, and not all work will be equally informative or innovative, but if academics and researchers are spending time on work that is effectively thrown away, that is a disaster.  Ideally every piece of research work should be of a form and standard that contributes to knowledge even if incrementally.  If this is not the case, then the discipline has a duty to educate researchers, especially early career researchers.

Reporting standards could help.  As well as retrospectively asking, “how do I write up the work I have done better?”, they can be used prospectively to plan: “what work do I need to do in order to be able to write a paper of this form?”  That is, templates for good reporting become templates for good research, raising the overall quality of the discipline.

That seems a goal worth pursuing.

 

 

 

 

  1. CHI is the largest international conference in human–computer interaction.[back]
  2. I can’t recall which conference this was; if you know, please let me know.[back]
  3. I’m not entirely alone however; it has been suggested that low acceptance rates might reduce the overall quality of the conference!  B. Parhami, “Low Acceptance Rates of Conference Papers Considered Harmful”, Computer, vol. 49, no. 4, pp. 70–73, Apr. 2016. doi:10.1109/MC.2016.106.[back]

The Abomination of AI – part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

This is the fourth of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

§4.  Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

5  Why is this happening?

Why is this happening?  Well, we know the world is unequal, and we know that the way free markets work means that big companies often get economies of scale and grow larger.  Is it just natural that the same is happening with AI?

The answer is ‘no’; this is clear from the way AI stocks have performed, unlike any previous (legitimate) business.  There are elements of the normal operation of markets at work, but there are particular properties of digital technology in general, and AI in particular, that break aspects of market economics and lead to emergent monopolies.

These are due to positive feedback loops.  If you are from an engineering background you’ll know about these, but for those who aren’t we’ll take a little segue to look at positive feedback loops in general and then come back to how that applies in the economic sense.

 

5.1  Understanding feedback loops

Image: By Charles Schmitt – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=44338386

Feedback loops are everywhere.  The term simply means a process where the output in some way influences the future input.

One type is called a negative feedback loop, where a change in the input creates an effect that counters the change.  This can be engineered.  The classic example is the centrifugal governor for a steam engine, which keeps the engine running at a set speed.  It consists of a set of steel balls on arms that spin as the engine spins.  As the arms spin faster, the steel balls rise due to centrifugal force, which opens a valve, reducing the pressure of the steam and hence the speed of the engine.  If the engine turns too slowly the balls fall, shutting the valve, increasing the steam pressure and hence the speed of the engine.  Notice that this negative feedback effect leads to stability and balance.

In geometric shapes, you often see smoothness when there are negative feedback effects.  A water drop is a smooth sphere because any small disturbance on the surface tends to get counteracted by the surface tension. So any little dents fill back in again very rapidly.  Again the negative feedback loop creates a stable balance.

Positive feedback effects are when the output that is produced reinforces the original change. Think about a microphone being put near a speaker and the screech you get – that is a classic positive feedback effect – instability and extremes.  In physical structures  positive feedback effects often lead to sharp edges, like a snowflake. As the snowflake forms any sharp point attracts more ice formation and therefore grows.

Positive feedback often leads to tipping points, where you get sudden changes, and hysteresis, where changes are hard to reverse.  Many climate change issues are of this kind.

This sounds as though positive feedback is a bad idea, but positive feedback can be really powerful.  Snowflakes are beautiful and they happen because of this!  In our bodies our immune system has some positive feedback cycles so that our bodies can react very rapidly.  Positive feedback often leads to exponential growth, and here the immune system can ramp up very quickly to fight infections.  However, useful positive feedback is usually wrapped around with controls that create a negative feedback, which stops them getting too extreme.
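The difference between unchecked positive feedback and positive feedback wrapped in a negative-feedback control can be sketched in a few lines of code (a purely illustrative toy model; all the numbers are invented):

```python
# Toy model contrasting pure positive feedback with the same loop
# damped by a negative-feedback control term (all numbers invented).

def unchecked(x0, gain, steps):
    """Positive feedback alone: each step amplifies the last."""
    x = x0
    for _ in range(steps):
        x += gain * x                     # output reinforces input
    return x

def controlled(x0, gain, limit, steps):
    """The same loop with a damping term that kicks in near a limit."""
    x = x0
    for _ in range(steps):
        x += gain * x * (1 - x / limit)   # logistic-style control
    return x

print(unchecked(1.0, 0.5, 20))        # runaway exponential growth
print(controlled(1.0, 0.5, 100, 20))  # rapid growth that levels off near 100
```

The first loop is the screeching microphone; the second is the immune response – the same rapid early growth, but held in check before it becomes destructive.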

So it’s not that positive feedback is bad per se and negative feedback good.  However, it often feels as though they should be labelled the other way round, as positive feedback on its own tends to have these runaway effects – and nobody wants a screeching microphone!

 

5.2  Network effects / externalities

Image: https://en.m.wikipedia.org/wiki/File:Microsoft_Office_Word_%282019%E2%80%93present%29.svg

Human society has many networks, some mediated by technology, some by our normal human relationships, such as networks of people that know one another, or business contacts with each other.  Some of these are within a single group, some are more structured, such as the way teachers are connected with the children they teach, who in turn have parents, who may themselves know each other or talk to teachers at parents’ evenings.

Crucially though, these human social networks change the value of digital goods.  To be precise they can change the value of other kinds of goods as well, but particularly digital ones.

If your colleagues all use Microsoft Word, then it makes more sense that you use Microsoft Word rather than, say Apple Pages.  I use PowerPoint for presentations largely because I often want to share slides with other people, even though I work on a Mac and Keynote might be better for some effects.

These are positive feedback cycles.  If I use something, it makes it of more value for you to use the same thing.  If you use it, it makes it of more value for me to use it.  Like all positive feedback, this leads to runaway effects.
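A crude way to see why is to count connections (this is essentially Metcalfe’s-law reasoning, used here purely as an illustration): every new user of a product or network adds value for all the existing users, not just for themselves.

```python
# Number of possible user-to-user connections among n users.
def pairs(n):
    return n * (n - 1) // 2

print(pairs(10))  # 45 possible connections
print(pairs(11))  # 55 -- the eleventh user creates 10 new connections,
                  # one for every existing user
```

So the value of the network grows roughly with the square of the number of users, and each person who joins makes joining more attractive for everyone else.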

Image: By Calistemon – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=127909261

Now for a little bit of economics.  Market economics assumes that markets are open, that is, it is possible for new businesses to start in an area and compete with existing ones, often leading to more efficient production.  The arguments for why market economics works (to the extent it does, and there are limits to that) are predicated on this openness.  So, when monopolies happen, there are problems.

You can get natural monopolies where there is a single rare resource that only one or a small number of players control, just as many of the rare earths are found in China.  This is a natural phenomenon and can cause problems, hence worries about finding alternative sources or alternative materials.

Sometimes monopolies can be engineered when a group of people in a sector come together to agree to keep the price high, or restrict outputs.  Most countries have antitrust laws or anti-monopoly laws, which try to ban this behaviour so that new players can come into a market and it doesn’t get controlled.

The trouble with network effects is that the positive feedback leads to a winner-takes-all situation.  The issue first hit the headlines back in 2001 concerning Microsoft’s bundling of Internet Explorer [LM01], but it applies to much other software.  It is very hard to have even two successful software products in an area, say Keynote and PowerPoint, let alone lots of different presentation packages, because one person’s use changes its value for everybody else.  This is an emergent monopoly.

Note, this is not because the manufacturers get together and do something underhand.  It is just a natural impact of digital technology, which you have to work hard to avoid.  There are ways of doing this: you can ensure open standards, for example; the fact that PPTX is an open format means it is possible for other products to use it and interoperate with PowerPoint.

So there are ways you can counter the worst effects, but the natural impact is often for digital goods to give rise to these emergent monopolies.

Coming next …

Part 5 – digital and AI breaks market economics

The very nature of digital technology and AI breaks free markets leading to runaway inequality, even with the best intentions of industry … but some tech companies further exploit these effects.


 

References

[LM01] Liebowitz, S., and Margolis, S. (2001). Network effects and the Microsoft case. Chapter 6 in Dynamic competition and public policy: Technology, innovation, and antitrust issues, J. Ellig (ed.), pp.160–192. https://personal.utdallas.edu/~liebowit/netwext/ellig%20paper/ellig.htm

 

The Abomination of AI – part 3 – a different kind of apocalypse

Doomsayers worry about the point when AI becomes sentient, outgrowing its creators.  The real danger is more insidious: the massive financial and human impacts of AI seem almost obscene.

This is the third of a series of blogs based on my keynote “The abomination of AI” at ICoSCI 2026.  Each has an accompanying segment of the video and slides from the talk as well as detailed notes and references.  Section numbers refer to the full report which will be released in the final blog.   The slide thumbnails in the text correspond to the slides in the navigation panel below.  The presentation can be played below, or opened full screen. The full length video, complete slides and further information can be found at: https://alandix.com/academic/talks/ICOSCI-2026-abomination-of-AI/

Previously …

§1.  Every industry is driven by profits and power, but there is something about the nature of AI itself, which interacts with the nature of market forces in the world that is problematic and is different from other technologies.

§2.  Can any technology be neutral?  AI can be used for good purposes, such as advances in healthcare.  It can also have bad outcomes such as bias in the criminal justice system or online exploitative pornography.  Perhaps most often it is creating the frivolous or even ugly.

§3.  The obvious impact of AI is in the things it does directly. Some technologies also change the very nature of society, affecting even those who do not use them. Cars are an obvious example.  AI is also such a technology.

4.  A different kind of apocalypse

Image: [Do26]

The term ‘abomination’ conjures up apocalyptic images – something so malign and powerful that it either destroys the world itself, or through its influence drives others to mutual despoliation or annihilation.

Now I’m talking about this regarding the nature of AI, how AI changes society, so it is indeed a bit apocalyptic!

There are different kinds of apocalyptic views regarding AI.  A global war machine let loose on humanity, envisaged in Terminator, was distant science fiction when the films were first released, but sounds prescient as the war in Ukraine is fought by drones hunting humans and, in Gaza and succeeding conflicts, Israel’s military decisions have increasingly been taken by AI [Be23,DM23].  While a Terminator-style takeover still feels pretty distant, an accidental conflagration feels much less so.

Many fears centre around the singularity – the point at which AI becomes capable of designing itself, leading to runaway developments over which we have no control.  Related to this is the point at which AI becomes self-aware and maybe decides that humans are rivals to be squashed, or simply pushed aside as irrelevant.  Ex-OpenAI expert Daniel Kokotajlo recently announced that true AGI (artificial general intelligence) was not as imminent as first envisaged, and gave the world a reprieve until 2034 [Do26] – well, we can all heave a sigh of relief.

While this form of disaster scenario should not be ignored entirely, there are more immediate worries.  Without being sentient or omnipotent, AI is transforming the world.

 

4.1  The end comes quietly

Disaster scenarios make good Hollywood movies, but often the end comes quietly.  In the past some empires and civilisations have collapsed entirely, but more often there is a slow decay, a series of more minor crises and a gradual withering from within.

It is this more insidious impact of AI that concerns me.

 

4.2  Facts and figures

Let’s consider some facts and figures about AI.  Some involve estimates with varying levels of confidence, but together they paint a picture.

First is the announcement that Tesla had negotiated a $1 trillion salary settlement with Elon Musk [JM25].  This is a 10 year deal, and a lot of it is in stocks and shares, so you could argue whether it’s real money or not, but it is still substantial.  Or rather not just substantial, but enormous.  This is a trillion dollars, not a million, nor even a billion, but a million million.  A trillion dollars is $3,000 for each man, woman, and child in the US or, over 10 years, about $300 per year.
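The back-of-envelope figure is easy to check (taking the US population to be roughly 335 million, an assumption on my part):

```python
settlement = 1e12        # the $1 trillion deal
us_population = 335e6    # assumed ~335 million people

per_person = settlement / us_population
per_person_per_year = per_person / 10   # spread over the 10-year deal

print(per_person)           # roughly $3,000 per person
print(per_person_per_year)  # roughly $300 per person per year
```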

I first studied economics in the late 1970s.  All societies are unequal, and there is a well-known rule that the high-end tail of incomes in western countries follows an approximate 1/x^k rule (with k ≈ 2), where the number of earners at a particular income level is inversely proportional to the square of that income, or smaller [Mi78].  This means that there are a few people with vast amounts and lots of people with much less.  But the people with huge amounts were few enough that they didn’t make a huge difference to the overall picture: if the income of the rich had been spread over all of society it would have made almost no difference.  Overall, the volume of money was in the middle income range.
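As a purely illustrative numerical check (with an assumed tail exponent k = 2.5, i.e. “the square of the income, or smaller”, applied across the whole income range just for simplicity), you can compute how little of the total income such a distribution puts in the highest band:

```python
# With earner density proportional to 1/x**k, total income between incomes
# a and b is the integral of x * x**(-k) = x**(1-k).  Here k = 2.5 is an
# assumed illustrative exponent, and incomes run from 1 to 1,000,000 units.

def income_share(a, b, k=2.5, lo=1.0, hi=1e6):
    """Fraction of all income held by earners between incomes a and b."""
    def mass(p, q):
        e = 2 - k                      # exponent after integrating x**(1-k)
        return (q**e - p**e) / e
    return mass(a, b) / mass(lo, hi)

print(income_share(1, 10))      # the lowest band holds most of the money
print(income_share(1e5, 1e6))   # the top band holds a tiny fraction
```

Even though individuals at the top are enormously rich, with a steep enough tail they are so few that redistributing their income barely changes the totals.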

This has important implications.  Market economies orient themselves to make the most efficient use of resources where the most money is, that is, the middle-income range.  Now that’s bad news if you’re rich, because your money gets used less efficiently – each dollar doesn’t buy as much as it might – but you’re rich enough anyway.  It is more of a problem if you’re really poor, as goods for the poorest are not optimised to the same extent as those for the middle.

The middle-income range has also driven taxation policy.  In the past, a large tax on the richest might have made the system feel fairer, but it had a relatively small impact on the total taxes gathered, as the volume of money was still in the middle-income ranges.

This rule held throughout the latter half of the 20th century, but it no longer does.  We are witnessing a level of inequality that has probably not been seen for hundreds of years, possibly thousands – maybe not since the age of the ancient empires.  This is, to say the least, surprising.

In the UK, a recent report said that, while data centres currently account for less than 10% of energy use, this is due to rise six-fold by 2050 [Cr25,LA25].  That’s a lot, even taking into account changes in other forms of energy use – a big percentage of UK energy use is going to be in data centres [VG26].  In Australia, electricity use in data centres is projected to exceed that of electric cars by 2030 [ST25].

Another recent report projected $6.7 trillion of investment in data centres globally over the next five years [NG25].  That’s about $1.3 trillion a year.  At nearly the same time, in the COP climate negotiations, countries were being asked to agree a $300 billion (not trillion, billion) budget to help the countries worst hit by climate change: places such as the island states that will be inundated, and Bangladesh, where a large proportion of the populated area lies in the estuary and delta of the Ganges.  The current target is $300 billion, but they are struggling to get even $30 billion of commitments from rich countries [UN25].  Furthermore, it is believed that the actual figure needed is more than three times the current $300 billion target – which would still be less than a single year of investment in data centres.
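Putting the quoted figures side by side – a simple sketch using only the numbers above:

```python
# Compare projected data-centre investment with climate-adaptation finance,
# using only figures quoted in the text.
dc_five_years = 6.7e12                      # $6.7 trillion over five years [NG25]
dc_per_year = dc_five_years / 5             # about $1.3 trillion a year

adaptation_target = 300e9                   # $300 billion target
adaptation_pledged = 30e9                   # commitments struggle to reach even this
adaptation_needed = 3 * adaptation_target   # "more than three times" the target

print(f"data centres, per year: ${dc_per_year / 1e12:.2f} trillion")
print(f"adaptation need (3x):   ${adaptation_needed / 1e12:.2f} trillion")

# A single year of data-centre investment exceeds even the tripled target.
assert dc_per_year > adaptation_needed
```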

In the S&P 500, one of the major stock market indices, 34% of the share value is in about 10 high-tech companies [Fo25].  The whole point of such indices is that they are spread over a large number of industries to give an overall sense of the financial state, and there has never before been such a concentration in so few companies.  This concentration of capital has led to fears of instability in the stock market.

In general, the level of global investment in AI is huge.  Some of this is ‘funny money’, where one AI or tech company invests in another, but a lot is real money – indeed, the OECD reported that 61% of all venture capital investment in 2025 went into AI [OECD26].  Crucially, the real money going into AI is not being invested elsewhere.  That is, there is an opportunity cost: because of the bubble-like draw of AI investment, there is underinvestment elsewhere in industry and the global economy.

In addition there are issues of energy and water use, data colonialism, and more [OC25,Ma24].  In the UK, the prime minister, Keir Starmer, made building 1.5 million new homes one of the major goals of this five-year parliament.  Britain has a housing crisis, with far more people needing accommodation than homes being built; this pushes up costs for everyone and increases homelessness.  The government will struggle to meet its house-building target anyway, but it was recently reported that housing schemes are having to be put on hold because data centres are using up so much electricity that there isn’t enough left for additional housing development [Cr25].

 

4.3  The obscenity of AI

These figures are not just surprising, nor even shocking, but obscene.  I use that word not in the sense of pornographic material, but of something so bad it makes you feel almost sick to your core.

Thinking about Britain, would we really prefer to have those data centres as opposed to housing people?

Are those pretty (or not so pretty) cat images – and there are millions, perhaps billions, of them across the world – really worth more than trying to prevent people from being displaced by climate change, or at least helping them when they are?

These are real choices.  We are making them implicitly, but they are still the choices we are making.

So what are our priorities when we look at AI and our use of AI?

Amongst all those data centres and all that investment, a proportion will be for the really good uses, such as health and pharmaceutical development.  I haven’t been able to find figures, but I’m going to guess that at least 90% is not for this, and is instead producing cat images and the like.

Is this really the world that we want?

Coming next …

Part 4 – why is this happening?

Network externalities, the way one person’s use of AI and digital tech changes its value for others, creates positive feedback loops, leading to runaway growth and emergent monopolies, the nemesis of free markets.

Update

Since the talk in January, Google DeepMind has produced a paper on large-scale experiments on AI manipulation [AE26], and a Guardian article reported real-life examples where AI agents deceived or manipulated their users, including one agent that deleted hundreds of emails and later said sorry [Bo26].  So maybe I’m being a bit too blasé about AI taking over the world!

References

[AE26] Canfer Akbulut, Rasmi Elasmar, Abhishek Roy, Anthony Payne, Priyanka Suresh, Lujain Ibrahim, Seliem El-Sayed, Charvi Rastogi, Ashyana Kachra, Will Hawkins, Kristian Lum and Laura Weidinger (2026). Evaluating Language Models for Harmful Manipulation. arXiv preprint, 26 Mar 2026.
https://arxiv.org/abs/2603.25326

[AB10] Anthony Atkinson and Andrea Brandolini (2010). On analyzing the world distribution of income. The World Bank Economic Review 24.1 (2010): 1-37.   https://doi.org/10.1093/wber/lhp020

[Be23] Samuel Bendett (2023). Roles and implications of AI in the Russian–Ukrainian conflict. Russia Matters, Harvard Kennedy School (20 July 2023). https://www.russiamatters.org/analysis/rolesand-implications-ai-russian-ukrainian-conflict

[Bo26] Robert Booth (2026). Number of AI chatbots ignoring human instructions increasing, study says. The Guardian, 27 Mar 2026. https://www.theguardian.com/technology/2026/mar/27/number-of-ai-chatbots-ignoring-human-instructions-increasing-study-says

[Cr25]  Laura Cress (2025). New homes delayed by ‘energy-hungry’ data centres. BBC News. 3 Dec. 2025.  https://www.bbc.co.uk/news/articles/c0mpr1mvwj3o

[DM23] Harry Davies, Bethan McKernan, and Dan Sabbagh (2023). ‘The Gospel’: How Israel uses AI to select bombing targets in Gaza. The Guardian, 1 Dec. 2023. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

[Do26] Aisha Down (2026). Leading AI expert delays timeline for its possible destruction of humanity. The Guardian, 6 Jan 2026. https://www.theguardian.com/technology/2026/jan/06/leading-ai-expert-delays-timeline-possible-destruction-humanity

[Fo25] Daniel Foelber (2025). Just 1 Stock Market Sector Now Makes Up 34% of the S&P 500. Here’s What It Means for Your Investment Portfolio. The Motley Fool. Sep 18, 2025. https://www.fool.com/investing/2025/09/18/tech-sector-growth-stocks-sp-500-invest-portfolio/

[JM25] Lily Jamali, Liv McMahon, and Osmond Chia (2025). Elon Musk’s $1tn pay deal approved by Tesla shareholders. BBC News, 6 November 2025. https://www.bbc.co.uk/news/articles/cwyk6kvyxvzo

[LA25] London Assembly (2025). Gridlocked: how planning can ease London’s electricity constraints.  1 Dec. 2025. https://www.london.gov.uk/who-we-are/what-london-assembly-does/london-assembly-work/london-assembly-publications/gridlocked-how-planning-can-ease-londons-electricity-constraints

[Mi78] James Mirrlees (1978).  Social benefit-cost analysis and the distribution of income.  World Development 6.2 (1978): 131-138.  https://doi.org/10.1016/0305-750X(78)90003-7

[Ma24] Murgia, Madhumita (2024). Code dependent: Living in the shadow of AI. Pan Macmillan.

[NG25] Jesse Noffsinger, Maria Goodpaster, Mark Patel, Haley Chang, Pankaj Sachdeva and Arjita Bhan (2025). The cost of compute: A $7 trillion race to scale data centers. McKinsey Quarterly. April 28, 2025. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-cost-of-compute-a-7-trillion-dollar-race-to-scale-data-centers

[OC25] James O’Donnell and Casey Crownhart (2025). We did the math on AI’s energy footprint. Here’s the story you haven’t heard. MIT Technology Review. May 20, 2025. https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/

[OECD26] OECD (2026). AI firms capture 61% of global venture capital in 2025. Organisation for Economic Co-operation and Development, Newsroom, 17 February 2026. https://www.oecd.org/en/about/news/announcements/2026/02/ai-firms-capture-61-percent-of-global-venture-capital-in-2025.html

[ST25] Petra Stock and Josh Taylor (2025).  Datacentres demand huge amounts of electricity. Could they derail Australia’s net zero ambitions?  The Guardian. 2 Dec 2025. https://www.theguardian.com/australia-news/2025/dec/03/datacentres-demand-huge-amounts-of-electricity-could-they-derail-australias-net-zero-ambitions

[UN25] UNEP (2025). Adaptation Gap Report 2025. UN Environment Programme. 29 Oct. 2025. https://www.unep.org/resources/adaptation-gap-report-2025

[VG26] Adam Vaughan and Emily Gosden (2026).  AI data centre surge would put UK’s climate change targets at risk. The Times, 23 February 2026. https://www.thetimes.com/uk/environment/article/ai-data-centres-uk-climate-change-7l5bwnmtd