Workplace Insights by Adrie van der Luijt

When the music stops

What years of failed IT projects taught me about the AI bubble

A senior government content designer who witnessed Universal Credit and Rural Payments catastrophes explains why the AI bubble will burst the same way every failed IT project does.

I’ve spent fifteen years designing content for government digital services, following a decade in the private sector. I’ve watched billions of pounds burn on IT projects that promised revolution and delivered chaos. And when I read the latest breathless defences of AI valuations, particularly Jason Snyder’s recent Forbes piece arguing this isn’t a bubble because “thermodynamics”, I recognise the rhetoric immediately.

It’s the same language vendors used to sell Universal Credit, Rural Payments, Smartlandlord and every other catastrophic IT failure I’ve witnessed. The infrastructure is real. The technology is revolutionary. This time it’s different. The spending is justified because the transformation is fundamental.

Except when you’re sitting in a Warrington job centre watching staff handwrite claimant details because a £1 billion IT system can’t save basic information, the rhetoric rings hollow.

The AI bubble isn’t a debate anymore

Recent market turbulence has made the question unavoidable. Markets dropped for four straight days in mid-November 2025, with growing concerns about sky-high valuations for tech and AI companies giving investors flashbacks to the late 1990s internet bubble.

The numbers are staggering. AI-related stocks have accounted for 75% of S&P 500 returns, 80% of earnings growth and 90% of capital spending growth since ChatGPT launched in November 2022. When three-quarters of market gains come from a single technology bet, that’s not diversification, but desperation.

Even the executives building AI systems are hedging. Google CEO Sundar Pichai recently warned that no company would be immune if the AI bubble were to burst. When the people profiting from the hype start using words like “irrational”, it’s time to pay attention.

Michael Burry, the investor who predicted the 2008 housing collapse, has placed a $1 billion bet against AI, claiming Big Tech’s profits are built on accounting fraud through manipulated depreciation schedules. His specific accusation? Companies are stretching the useful life of AI hardware from three years to six, artificially inflating earnings by delaying costs.
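The arithmetic behind that accusation is simple enough to sketch. Here is a minimal illustration using straight-line depreciation and a hypothetical $30 billion hardware fleet, not any company’s actual figures:

```python
# Illustrative straight-line depreciation: stretching useful life
# from three years to six and its effect on reported earnings.
hardware_cost = 30_000_000_000  # hypothetical $30bn GPU fleet

annual_expense_3yr = hardware_cost / 3  # $10bn of expense per year
annual_expense_6yr = hardware_cost / 6  # $5bn of expense per year

# In each of the first three years, the six-year schedule books
# $5bn less expense, so pre-tax earnings appear $5bn higher even
# though the cash spent on hardware is identical.
earnings_boost = annual_expense_3yr - annual_expense_6yr
print(f"Apparent earnings boost per year: ${earnings_boost / 1e9:.0f}bn")
```

The cash leaving the building is identical either way; only the year it hits the income statement changes. And if the chips really are worn out or obsolete after three years, the deferred expense still arrives, just later and all at once.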

The “this time it’s different” defence

Jason Snyder’s Forbes article defending AI valuations makes sophisticated arguments. He’s right that AI infrastructure is physical and real. He’s right that energy consumption is genuine. He’s correct that previous infrastructure revolutions, such as railroads, electricity and telecommunications, all looked irrational before they proved essential.

What he misses is that infrastructure being real doesn’t validate the valuations.

The railroads transformed civilisation. Most railroad companies went bankrupt. The dot-com bubble financed crucial internet infrastructure. Thousands of companies failed anyway. Real infrastructure, transformative technology and catastrophic financial losses aren’t mutually exclusive. They’re historically linked.

Snyder argues we’re “betting against thermodynamics” if we short AI stocks. But thermodynamics doesn’t care about share prices. The laws of physics don’t justify OpenAI’s $500 billion valuation when it’s losing $12 billion per quarter whilst generating $13 billion in annual revenue.

His most telling argument is about energy: “Every model, every inference, every output we describe as intelligent is ultimately a transformation of electricity into structured probability.” True. But energy consumption is a cost, not a business model. If anything, AI’s massive energy requirements highlight why the economics don’t work. These systems are phenomenally expensive to run, whilst most companies can’t extract proportional value from them.
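To see why, sketch the unit economics. The per-query figures below are purely illustrative assumptions (no vendor publishes them), but the shape of the problem holds whenever serving a query costs more than it earns:

```python
# Hypothetical unit economics: when each query costs more to serve
# than it earns, scale multiplies the loss instead of curing it.
cost_per_query = 0.03     # assumed compute-plus-energy cost, dollars
revenue_per_query = 0.01  # assumed revenue attributable per query
queries_per_day = 1_000_000_000  # assumed daily volume

daily_loss = (cost_per_query - revenue_per_query) * queries_per_day
print(f"Loss per day at this volume: ${daily_loss:,.0f}")  # $20,000,000
```

Growth only helps once the inequality flips. Until revenue per query exceeds cost per query, every additional user deepens the loss.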

The circular economy that isn’t an economy

OpenAI has secured warrants over roughly 10% of AMD’s shares as part of a chip supply deal, whilst Nvidia is investing up to $100 billion in OpenAI. OpenAI counts Microsoft as a major shareholder, whilst Microsoft is also a major customer of CoreWeave, in which Nvidia holds equity. Money circulates between the same companies, creating the appearance of a thriving ecosystem, whilst the AI ventures at the centre of it still lose money.

OpenAI has committed to buying $300 billion of computing capacity from Oracle over the next five years, averaging $60 billion annually, whilst losing billions and generating just $13 billion in projected 2025 revenue. When the deal was announced, Oracle’s shares jumped 40%, adding nearly a third of a trillion dollars to its market value in a single day.
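Set those two numbers side by side and the mismatch is stark. A back-of-envelope sketch using only the figures quoted above:

```python
# Back-of-envelope: the Oracle commitment set against projected revenue.
oracle_commitment = 300_000_000_000        # $300bn over five years
annual_commitment = oracle_commitment / 5  # $60bn per year

projected_2025_revenue = 13_000_000_000    # $13bn

ratio = annual_commitment / projected_2025_revenue
print(f"Annual Oracle spend alone is {ratio:.1f}x projected revenue")
```

And that is a single deal, one slice of the roughly $1.4 trillion in total spending commitments discussed below.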

This isn’t market confidence. It’s musical chairs.

What government IT failures taught me

I’ve seen this before. Not at this scale, obviously. Government procurement operates with mere billions rather than hundreds of billions. But the patterns are identical.

Universal Credit: when the system forgot everyone

During the Warrington pilot of Universal Credit, the system was so fundamentally broken that it couldn’t save claimant details. Staff handwrote information to enter later. This was a £1 billion IT contract that couldn’t perform the most basic database function.

The project was described to me by one senior civil servant as “like a cake with three slices: the content slice is delicious, the policy slice is good and the IT slice is completely missing.”

Rural Payments Agency: eight users is capacity

The Rural Payments Agency built a system intended for every farmer and landowner in the UK. Their servers hit 100% capacity when eight people logged in simultaneously.

Eight people.

The project was so catastrophically mismanaged that it led to a full public inquiry. The vendors had promised revolutionary efficiency. They delivered a system that couldn’t handle a village cricket club’s membership database.

Smartlandlord: when IT insists on SharePoint

At Smartlandlord, the head of IT insisted on using SharePoint as the central system for a complex project that combined insurance quotes, tenant referencing and other features. A contractor quoted £250,000 to implement it properly. IT talked them down to £65,000. Then they had to pay extra for every change, as the cobbled-together system predictably started failing.

The bank eventually pulled funding because the whole project had become unstable. Not because the business model was wrong. Not because we weren’t making great headlines in the Financial Times, Property Week and other media. Not because there was no market. Because IT made dogmatic technology choices, underinvested in proper implementation, then watched it collapse.

The pattern: technology dogma over delivery

Every failed project I’ve worked on shared common characteristics:

Vendors promised revolution. The technology would transform everything. This wasn’t just a new system, but a paradigm shift. The old ways were obsolete. Anyone questioning the vision was a Luddite.

Executives believed the demos. They saw polished presentations showing perfect workflows. They didn’t ask what happens when data is messy, users behave unpredictably or edge cases emerge. They fell for the theatre.

IT made technology choices based on faith rather than evidence. They picked technologies because they were fashionable, because the vendor had good relationships or because it looked impressive on LinkedIn. Not because those technologies were right for the actual requirements.

When problems emerged, the response was always more money and more time. The system doesn’t work? That’s because we haven’t invested enough. Give it another year. Add another £100 million. The technology is sound; we just need to push through.

The people who raised concerns were dismissed as not understanding the technology. If you questioned whether the system could actually deliver, you clearly didn’t grasp how transformative it would be. You were thinking small. You lacked vision.

Every single one of these patterns is visible in AI right now.

MIT found 95% of enterprise AI pilots fail

A recent MIT study revealed that only 5% of custom AI tools survive the pilot-to-production transition, whilst generic chatbots achieve 83% adoption for trivial tasks but stall when workflows demand context and customisation.

Companies are spending billions on AI initiatives that deliver no measurable value. Not because AI doesn’t work, but because organisations are trying to bolt sophisticated technology onto broken processes, unclear requirements and unrealistic expectations.

The study concluded that without governance, memory and genuine workflow redesign, AI pilots are just expensive theatre. Exactly like government IT demonstrations that looked brilliant until they met real users.

The energy argument is backwards

Snyder argues that AI’s energy consumption proves it’s not a bubble, that we’re investing in fundamental infrastructure. But energy consumption without corresponding value creation is just waste.

Peter Andersen, founder of Andersen Capital Management, noted concerns about “how much is going to be spent, are we going to see results, how much energy it’s going to cost”. The question isn’t whether the energy is real. It’s whether the economic value justifies the energy cost.

Nuclear power plants are real infrastructure. That doesn’t mean every nuclear construction project is financially viable or that uranium mining companies deserve infinite multiples.

What happens when the music stops

Morgan Stanley’s CEO warned in November 2025 of a potential 10-15% equity market drawdown ahead, with concentration risk in tech stocks creating the possibility of steeper declines.

Three scenarios seem possible:

The optimistic case: Like the dot-com bubble, the crash destroys speculative excess but leaves valuable infrastructure. The fibre-optic cables remained useful even when Pets.com collapsed. Perhaps AI data centres and trained workforces will power future innovations we haven’t imagined yet.

The moderate case: A multi-year correction rather than a sudden crash. Valuations deflate gradually as companies fail to demonstrate ROI. Consolidation accelerates. The 95% of AI pilots already failing become 95% of AI companies closing.

The catastrophic case: Overleveraged firms default. The circular economy of companies investing in each other collapses. With roughly 40% of companies in the Russell 2000 index already unprofitable, a broad AI crash could trigger cascading failures across the market.

The human cost of technological faith

Had Universal Credit’s systems failed at full scale, real people would have been unable to access the benefits they desperately needed. If Rural Payments had crashed after going live, farmers would have faced financial hardship because they couldn’t claim subsidies. When Smartlandlord’s IT decisions killed off the project, employees lost their jobs.

The AI bubble will have similar casualties. Not just shareholders losing money, though that will be painful enough, but workers losing jobs when companies that over-invested can’t meet payroll. Services shutting down when the funding dries up. Genuine AI innovations starved of investment because the entire sector has been tainted by excess.

What enterprises should actually do

If you’re commissioning AI work, treat it like any other IT project. Which means learning from the catastrophic failures:

Demand proof, not promises. Don’t accept demos. Require evidence of similar implementations at a comparable scale. If a vendor can’t show you working systems serving real users, walk away.

Start with the problem, not the technology. What are you actually trying to achieve? What’s the smallest possible implementation that could demonstrate value? Don’t let vendors tell you the solution first. They’ll always recommend what they’re selling.

Invest in proper implementation. The £250,000 SharePoint implementation would probably have worked. The £65,000 bodge job collapsed. Proper architecture, thorough testing and genuine expertise cost money. Cutting corners on implementation destroys value.

Plan for failure modes. What happens when the AI makes mistakes? When the data is incomplete? When users behave unpredictably? If your plan assumes perfection, your plan will fail.

Listen to people raising concerns. When someone says “this won’t work because…” don’t dismiss them as lacking vision. They might be the only person in the room who understands the reality underneath the PowerPoint slides.

The brutal truth

AI will transform some things. It already has. But transformation doesn’t justify irrational valuations, circular investment schemes or accounting manipulations.

OpenAI’s revenues of $13 billion in 2025 don’t justify spending commitments of $1.4 trillion when it’s losing $12 billion per quarter. Nvidia’s dominance doesn’t guarantee its valuation when major customers are developing their own custom chips. The infrastructure being real doesn’t mean the companies building it deserve infinite multiples.

I’ve watched enough government IT disasters to recognise the warning signs. The revolutionary rhetoric. The faith-based technology choices. The dismissal of concerns. The belief that throwing more money at broken systems will eventually make them work.

Universal Credit cost billions and delivered handwritten notes in a job centre. Rural Payments spent millions on a system that couldn’t handle eight simultaneous users. Smartlandlord bankrupted itself by letting IT dogma override business sense.

The AI bubble will burst. The only questions are when, how hard and how many casualties there will be when it does.

Anyone telling you “this time it’s different because thermodynamics” is selling you something. Usually expensive hardware, speculative shares or consulting services.

The music always stops eventually. The question is whether you’re still dancing when it does.

About Adrie van der Luijt

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes:

  • developing the UK’s national drink and needle spiking advice service used by 81% of police forces in England and Wales – praised by victim support organisations
  • creating user journeys for 5.6 million people claiming Universal Credit and pioneering government digital standards for transactional content on GOV.UK
  • restructuring thousands of pages of advice for Cancer Research UK’s website, which serves four million visitors a month.