Workplace Insights by Adrie van der Luijt

the AI excuse

How tech spending became the perfect cover for cutting costs

Organisations are using AI as a convenient excuse for layoffs driven by catastrophic overspending on infrastructure, not by technological advancement making workers redundant.

When Amazon announced 14,000 corporate job cuts last week, executives were careful to frame them as AI-driven innovation. One top executive noted that the current generation of AI is “enabling companies to innovate much faster than ever before.” Shortly afterwards, another Amazon representative, speaking anonymously to NBC News, admitted, “AI is not the reason behind the vast majority of reductions.” On an investor call, Amazon CEO Andy Jassy confirmed the layoffs were “not even really AI-driven”.

This isn’t surprising. What’s surprising is that we keep pretending to be surprised.

I’ve spent four decades watching organisations use whatever technology narrative is currently fashionable to justify decisions they’d already made for entirely different reasons. In the 1990s, it was the internet. In the 2000s, it was outsourcing. In the 2010s, it was digital transformation. Now it’s AI. The technology changes. The pattern doesn’t.

Companies overspend on infrastructure they can’t justify, fail to generate sustainable returns, then cut costs by shedding workers whilst claiming they’re innovating. We’ve seen this act before. AI is just the convenient excuse this time around.

The spending problem nobody wants to discuss

Tech companies are experiencing financial stress because of their huge spending on AI infrastructure. Amazon increased its total capital expenditure from $54 billion in 2023 to $84 billion in 2024, with an estimated $118 billion planned for 2025. Meta is securing a $27 billion credit line to fund its data centres. Oracle plans to borrow $25 billion annually over the next few years to fulfil its AI contracts.

These aren’t modest investments that didn’t quite pan out. These are catastrophic spending decisions that most organisations would be crucified for in normal circumstances. But because it’s AI, because it’s innovation, because everyone else is doing it, nobody questions whether spending $118 billion is remotely sensible when you’re not seeing returns that justify it.

The revenue side tells you everything you need to know. AI revenue won’t exceed $30 billion this year. Capital expenditures on AI cloud infrastructure are approaching $1 trillion for 2025. Those numbers don’t add up. They never did. But admitting that would mean acknowledging the spending was reckless rather than visionary, so instead we get layoffs blamed on AI advancement rather than AI overspending.

When I worked on Universal Credit in 2012, we were building the UK’s first digital-by-default public service. The IT contractors were on a £1 billion contract. Management consultants were on another £1 billion contract. The spending was enormous. But the IT side let the project down completely, a pattern I saw again later when working on CAP Rural Payments for Defra.

Universal Credit was eventually rebooted. I have no insider information about what happened after that reboot. What I can tell you from public record is that when financial pressure came, the narrative around cuts wasn’t “we spent £2 billion on contracts that didn’t deliver what was promised”. The narrative was about efficiency and transformation and doing more with less.

This is the pattern I’ve witnessed repeatedly across government digital projects. Massive spending on infrastructure and consultancy. Technology that doesn’t deliver. Then when cuts come, whether to JobCentre staff supporting claimants or to the teams building the services, they’re framed as inevitable consequences of digital transformation rather than as responses to failed spending decisions.

The myth of the skills gap

Fast Company reports that 40% of businesspeople surveyed have received “AI slop” at work in the last month and that it takes nearly two hours, on average, to fix each instance. They no longer trust their AI-enabled colleagues, viewing them as less creative, less intelligent and less capable.

This matches everything I’ve seen working with organisations implementing AI tools. The technology doesn’t work as promised. People spend more time fixing AI-generated mistakes than they would have spent doing the work properly. Quality drops. Trust erodes. But rather than admitting the technology isn’t fit for purpose, organisations blame users for not having the right skills to work with AI effectively.

It’s the digital exclusion pattern all over again. When my 89-year-old mother can’t navigate multi-factor authentication to check her bin collection, we call it a skills gap. When farmers can’t operate a Photoshop-style online mapping tool, within narrow error margins, to report land use for agricultural subsidies, we call it resistance to change. When employees produce worse work whilst using AI tools, we say they haven’t adapted to new ways of working.

At what point do we acknowledge that the problem isn’t the users, but the technology and the decisions behind implementing it?

I’ve watched this narrative play out across government digital services for decades. We implement systems that don’t work for the people who need to use them, then frame their struggles as personal failings rather than design failures. The technology becomes the excuse for excluding people whilst claiming we’re including them. AI is just the latest version of this same trick.

Who benefits from the confusion

The Fast Company article points out that OpenAI, Anthropic and other AI creators aren’t public companies required to release audited figures each quarter. Most big tech companies don’t separate AI from other revenues; Microsoft is the only one that does. We’re flying blind.

This opacity isn’t accidental. It benefits everyone who wants to use AI as justification for decisions that have nothing to do with AI capability.

When Amazon cuts 14,000 jobs and initially frames it as AI innovation before admitting it’s not AI-driven, that confusion serves a purpose. It makes workers feel like they’re losing jobs to inevitable technological progress rather than to executives who overspent on infrastructure and need to balance the books. It’s much harder to organise against technological inevitability than against poor financial management.

The same thing happened with digital transformation. When I worked on projects that excluded elderly farmers and vulnerable benefit claimants, the narrative was always about moving forward, embracing the future, becoming more efficient. Never about the fact that we’d committed to spending on systems that didn’t work for huge portions of our user base, then cut support services to make the numbers work.

The confusion helps organisations avoid accountability. If nobody knows exactly how much revenue AI generates, nobody can definitively say the spending was unjustified. If nobody can separate AI-driven efficiency gains from ordinary cost-cutting, nobody can prove the layoffs weren’t necessary. The opacity creates just enough doubt to make the decisions look defensible.

The college graduate problem

Fast Company notes that college graduates are having trouble finding jobs, with many young people convinced by the end-of-work narrative that there’s no point in preparing for careers. Ironically, surrendering to this narrative makes them even less employable.

This is where the AI excuse does its most damage. Whilst organisations are using AI as cover for cost-cutting driven by overspending, they’re also fostering a genuine belief amongst young workers that their skills have no value. That everything they might learn to do will be automated. That there’s no point in developing expertise because AI will do it better.

I’ve seen this pattern before. When digital transformation was the fashionable narrative, entire professions were told they’d become obsolete. Executive assistants would disappear because of calendar software. Librarians would vanish because of Google. Content designers wouldn’t be needed because content management systems would handle everything.

None of that happened. What happened is that roles evolved, some badly paid entry-level work disappeared, and organisations discovered that the technology couldn’t actually replace the human judgement, context and relationships that made those roles valuable. But by then, we’d already damaged career pipelines by telling young people not to bother training for roles that were supposedly disappearing.

The AI narrative is doing the same thing now, but worse. Because this time the technology genuinely can produce something that looks like work, even if that work is often useless and takes hours to fix. So young workers see AI-generated content and code and presentations. They believe the narrative that this makes them redundant. They don’t see the two hours of cleanup. They don’t see the lost trust. They don’t see how much worse the final output often is.

What they see is organisations using AI as justification for not hiring them, and they internalise that as inevitability rather than as organisations using technology narratives to justify decisions driven by overspending.

What we’re not talking about

Here’s what nobody wants to say out loud: most of these layoffs would be happening anyway. Not because AI makes workers redundant. Because organisations overspent on AI infrastructure without sustainable business models, and when you overspend badly, you cut costs. Laying off workers and asking those who remain to work harder is a tried-and-tested approach to cost-cutting. AI is just the convenient excuse this time.

During my time working on government digital services, I watched this pattern repeat endlessly. Big spending on technology that didn’t deliver what was promised. Pressure to justify the costs. Cuts to staff and support services framed as efficiency gains or changing ways of working. Vulnerable users left to navigate systems that didn’t work for them, with fewer people available to help.

The difference with AI is the scale of the spending and the brazenness of the narrative. We’re approaching $1 trillion in capital expenditures for infrastructure that generates $30 billion in revenue. That’s not a temporary mismatch whilst technology matures. That’s a fundamental problem with the business model. But admitting that would require acknowledging that the spending was reckless, so instead, we get layoffs blamed on AI advancement.

The people losing jobs aren’t losing them because AI can do their work better. They’re losing them because their organisations spent money they couldn’t justify and need to cut costs. The people struggling to find work aren’t struggling because AI makes them obsolete. They’re struggling because organisations are using AI narratives to justify hiring freezes whilst they try to make the numbers work after catastrophic overspending.

None of this is about technological capability. It’s about financial decisions and the narratives organisations use to justify those decisions.

Why this matters for everyone

You might think this only matters if you work in tech. You’d be wrong.

The AI-driven layoffs narrative affects everyone because it changes how we think about work, value and human capability. When organisations successfully use AI as justification for cost-cutting, it normalises the idea that technological advancement makes human workers redundant. That becomes the default explanation for any job losses, regardless of actual cause.

This matters for content practitioners because we’re often the ones creating the narratives that organisations use to justify their decisions. We write the blog posts announcing layoffs. We create the internal communications explaining the restructures. We develop the content strategies that frame technological change as inevitable progress.

We need to be honest about what we’re seeing. AI spending, not AI capability, is driving layoffs. Organisations that overspent on infrastructure are cutting costs and AI provides convenient cover. The technology often doesn’t work as promised and users spend hours fixing mistakes whilst being blamed for not having the right skills.

This isn’t inevitable technological progress. It’s poor financial management dressed up as innovation. And until we’re willing to say that honestly, we’re complicit in narratives that damage workers whilst protecting organisations from accountability for reckless spending decisions.

I’ve spent my career trying to make complex systems clearer and more humane. But clarity doesn’t help when the fundamental narrative is dishonest. When organisations claim they’re cutting jobs because AI makes workers redundant, when in fact they’re cutting them because they overspent on AI infrastructure and need to balance the books, that’s not a communication problem. That’s a truth problem.

The AI excuse works because it sounds sophisticated. Because it plays into fears about technological unemployment that have been around for centuries. Because it shifts responsibility from executives who made bad spending decisions to workers who supposedly aren’t skilled enough for the AI future.

But the pattern is the same as it’s always been. Organisations spend money they can’t justify, then cut costs by shedding workers whilst claiming innovation made them redundant. The technology changes. The pattern doesn’t.

And until we’re willing to name that pattern honestly, we’ll keep seeing it repeat with whatever technology narrative is fashionable next. The spending will continue. The layoffs will continue. And workers will continue to be told their value is diminishing, when actually what’s diminishing is organisations’ willingness to be honest about the consequences of their decisions.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes:

  • developing the UK’s national drink and needle spiking advice service used by 81% of police forces in England and Wales – praised by victim support organisations
  • creating user journeys for 5.6 million people claiming Universal Credit and pioneering government digital standards for transactional content on GOV.UK
  • restructuring thousands of pages of advice for Cancer Research UK’s website, which serves four million visitors a month.