
Workplace Insights by Adrie van der Luijt
After four decades in digital transformation, I’ve learned to recognise the moment when a project crosses from ambitious to reckless. It’s not when the technology fails to deliver. It’s earlier than that. It’s when someone raises a concern about consequences and gets told we’ll figure that out later.
We never figure it out later. Later is when people lose their jobs. When vulnerable users can’t access services. When my 89-year-old mother receives a 17-digit security code to check her bin collection and can’t understand why the world suddenly requires her to have skills she’ll never possess.
I’ve been writing this week about digital exclusion and AI overspending driving job losses across public sector services. But there’s a deeper problem underneath both articles, and it’s the one nobody in the room wants to discuss: we keep building systems without asking whether we should be building them at all.
A recent Time article argues that AI regulation isn’t enough, that we need AI morals. The author writes: “The challenge of our time is to keep moral intelligence in step with machine intelligence.” I’d go further. The challenge is that we’ve spent decades deliberately keeping moral intelligence out of technology decisions, treating ethics as an inconvenience that slows down innovation.
I’ve sat in enough government project meetings to know how this works. Someone raises a concern about vulnerable users, about job losses, about consequences we haven’t thought through. There’s a moment of uncomfortable silence. Then someone senior says “let’s park that for now” or “we’ll address that in phase two” or my personal favourite, “we can’t let perfect be the enemy of good.”
What they mean is: we’ve already committed the spending, we’ve already made the promises, and stopping now would mean admitting we made a mistake. So we push forward and the moral questions get parked permanently whilst we focus on delivery.
When I worked on Universal Credit, we had £2 billion in contracts with IT providers and management consultants. The IT side let the project down completely, the same pattern I witnessed working on rural payments for Defra. But by the time it became clear the technology wasn’t delivering what was promised, we were too far in to stop. I remember being the only one in the room at Government Digital Service events whose project didn’t make life better for the people involved. Having to write content explaining to elderly farmers with low digital literacy how to use the online hexagon tool to draw hedges within a 5% error margin. Finding out that the servers were at 100% capacity when eight people used the service at the same time, a service supposedly designed for hundreds of thousands of farmers. The moral questions about what happens to people when systems don’t work got subordinated to the practical question of how to salvage the spending.
Nobody asked whether we should be building a digital-by-default service when 4% of adults don’t have mobile phones. Nobody asked what happens to people in crisis when the system requires skills they don’t possess. Those weren’t treated as moral questions requiring answers before proceeding. They were treated as edge cases we’d handle later with support services.
Except later, when budgets tightened, those support services got cut. And the people we’d promised to help were left trying to navigate systems designed by people who’d never experienced their lives.
The Time article notes that technologists often describe ethics in computational terms: alignment, safety layers and feedback loops. But morality isn’t computational. It’s not something you can optimise or iterate towards. It requires understanding consequences before you create them, not figuring them out after you’ve already caused harm.
I’ve watched this play out across government digital services for decades. We bring in technical experts who understand code and infrastructure and user journeys. We bring in management consultants who understand efficiency and transformation and delivery frameworks. What we don’t bring in are people who understand vulnerability, exclusion and the lived experience of being failed by systems.
When my mother can’t access digital services because they require multi-factor authentication she’ll never comprehend, that’s not a technical problem requiring a technical solution. It’s a moral failure. Someone decided that security was more important than access. Someone decided that efficiency was more important than inclusion. Someone decided that digital-by-default was worth excluding millions of people.
Those were moral decisions, even if nobody framed them that way. And the people making those decisions had neither the expertise nor the inclination to understand their consequences.
The same thing is happening now with AI. Governments are spending catastrophically on infrastructure whilst laying off the civil servants and contractors who understand how services actually work. Nobody’s asking whether we should be betting everything on technology that might not deliver. Nobody’s asking what happens to organisational expertise when we replace experienced workers with AI systems whose outputs require two hours of fixing.
Those are moral questions. But they’re being treated as technical questions, which means they’re being answered by people who think in terms of optimisation and efficiency rather than dignity and consequences.
The Time article argues that dignity insists a person’s worth is intrinsic, not measurable in data points or economic output. This is precisely what gets lost when we treat moral questions as technical problems.
I’ve spent my career trying to make complex systems clearer and more humane. I pioneered GDS standards for transactional content. I’ve worked on services touching millions of people. I understand the pressure to deliver efficiency in a world of minimum viable product thinking, to demonstrate value, to show that spending produced results.
But efficiency isn’t a moral principle. It’s a measurement. And when we optimise for efficiency without asking what we’re sacrificing to achieve it, we make moral decisions whilst pretending we’re making technical ones.
When governments implement digital-by-default services, they’re choosing efficiency over access. When they cut support services to balance budgets after technology overspending, they’re choosing financial sustainability over human need. When they lay off experienced workers and replace them with AI systems, they’re choosing optimisation over expertise.
None of these are presented as moral choices. They’re presented as inevitable consequences of modernisation, as necessary adaptations to changing technology, as pragmatic responses to financial pressure. But they’re moral choices. We’re just refusing to acknowledge them as such.
I’ve been part of these decisions. I’ve written content for services that excluded vulnerable users. I’ve worked on projects where we knew the technology wasn’t ready but delivered anyway because stopping would mean admitting failure. I’ve sat in meetings where concerns about consequences got parked because addressing them would slow delivery.
I’m not claiming moral superiority. I’m saying I’ve seen what happens when moral questions get treated as inconveniences rather than as essential considerations that should shape whether we proceed at all.
My mother is 89, deaf, nearly blind and traumatised by the digital services I spent my career building. She became vulnerable when my father died. She suddenly had to navigate systems they’d previously managed together. The systems didn’t anticipate this transition. They punished her for experiencing it.
This wasn’t an accident. It was a consequence of decisions made by people who never asked whether we should be making essential services conditional on digital access. Who never asked what happens to people during major life transitions when systems assume constant capability. Who never asked whether efficiency for the majority justified the exclusion of the vulnerable minority.
Those questions weren’t parked because they were difficult to answer. They were parked because answering them honestly would have required admitting that digital-by-default was the wrong approach for essential services. That some things are too important to make conditional on having a smartphone and knowing how to use it.
The same pattern is playing out with AI. Governments and organisations are spending close to $1 trillion on infrastructure that generates $30 billion in revenue. When those numbers don’t add up, they cut jobs and claim AI makes workers redundant. But the real moral question isn’t whether AI can do the work. It’s whether we should be betting everything on technology that isn’t delivering, whilst destroying organisational expertise we’ll desperately need when the technology fails.
Nobody’s asking that question. Instead, we’re getting the same tired narrative about inevitable technological progress and the need to adapt. As if the problem is workers not being flexible enough, not organisations making reckless decisions about technology they don’t understand.
The Time article argues that ethical due diligence should become as routine as financial due diligence. Before asking how large a technology might become, we should ask what kind of behaviour it incentivises, what dependencies it creates and who it leaves behind.
I’d add: we should ask who gets to participate in these discussions. Because right now, the people making decisions about AI deployment are almost never the people who’ll bear the consequences of those decisions.
When civil servants lose jobs because of AI overspending, they weren’t in the room when ministers decided to bet on technology that wasn’t ready. When my mother can’t access services because they require digital skills she’ll never have, she wasn’t consulted about whether digital-by-default was acceptable. When farmers couldn’t claim agricultural subsidies because systems required Photoshop skills, they weren’t asked whether this was a reasonable requirement.
The people building systems and the people using systems occupy completely different worlds. And until we’re willing to include users’ voices, particularly vulnerable users, in decisions about whether to build these systems at all, we’ll keep creating technology that harms whilst claiming we’re helping.
During my work on drink and needle spiking reporting services for police, victim support organisations praised the content as “victim-focused” and “excellent”. That didn’t happen because I’m particularly skilled. It happened because we centred victims’ voices from the start, because we asked them what they needed rather than telling them what we thought they should want.
That approach should be standard for any service affecting vulnerable people. Instead, it’s treated as exceptional, something we do when we have time and budget, not a fundamental requirement for proceeding ethically.
Here’s what I think the Time article is getting at and what my four decades in digital transformation have taught me: we need to recover the ability to say no.
Not “no, but here’s a workaround” or “no, but we’ll address that in phase two”. Just no. This is the wrong approach. These consequences are unacceptable. We shouldn’t be doing this.
I’ve never been in a government project meeting where someone successfully stopped a project by arguing it would cause unacceptable harm. I’ve been in plenty of meetings where people raised concerns that got parked. But stopping? That would require treating moral questions as more important than delivery schedules and spending commitments.
The Time article argues that we should be using technology to expand empathy, creativity and understanding, not to reduce human complexity into patterns of prediction. But we can’t expand empathy whilst excluding the people who most need our services. We can’t enhance understanding whilst laying off the people who possess expertise. We can’t preserve human dignity whilst treating people’s worth as measurable in data points and optimisation metrics.
And we can’t keep pretending that these are technical questions with technical solutions. They’re moral questions. They require moral intelligence, not machine intelligence. They require asking whether we should be doing something before we commit billions to doing it.
I wish I’d said no more often during my career. When projects were heading towards predictable harm and everyone knew it but nobody wanted to stop. When vulnerable users were being excluded and we told ourselves we’d fix it later. When spending commitments drove decisions that should have been driven by consequences.
I didn’t say no, because I didn’t think it would make a difference. Because the momentum was too great, the commitments too large, the political pressure too intense. Because saying no would mean being removed from projects rather than changing their direction.
But silence is complicity. Every time I stayed quiet when I should have spoken, I was making a moral choice, even if I told myself I was just doing my job and that the project would simply have replaced me if I had said no.
The AI spending crisis across government services is giving us another chance to ask the question we should have asked about digital-by-default: should we be doing this at all? Not can we make it work eventually, not will it deliver efficiency gains, not how do we optimise implementation. Should we be betting public services on technology that isn’t delivering whilst destroying organisational expertise?
The honest answer is no. We shouldn’t. The returns don’t justify the spending. The technology doesn’t work as promised. The consequences for workers and service users are predictable and unacceptable. This is reckless and we should stop.
But we won’t stop. Because stopping would mean admitting we made a mistake. Because the spending commitments are already made. Because politicians have already promised transformation. Because the people making decisions aren’t the people bearing consequences.
So we’ll keep going until the systems fail spectacularly enough that we can’t ignore them anymore. Until the job losses mount and organisational expertise vanishes. Until vulnerable people can’t access essential services and the political cost becomes too high. Until we’re forced to reckon with consequences we could have prevented if we’d been willing to ask moral questions before committing to technical solutions.
The Time article calls for a moral compass to guide AI development. I’d argue we need that compass for all technology decisions affecting public services, not just AI.
That compass would ask: Who benefits from this? Who pays the price? Are we optimising for efficiency at the expense of access? Are we confusing what we can do with what we should do? Are we including the voices of people who’ll be most affected? Are we willing to stop if the answers to these questions are unacceptable?
These aren’t complicated questions. They don’t require technical expertise to answer. They require the moral courage to ask them honestly and to act on the answers.
After four decades in digital transformation, I’ve learned that moral courage is the scarcest resource in technology development. Technical skills are abundant. Financial resources can be raised. Political will can be manufactured. But the willingness to stop a project because it will cause unacceptable harm, even when stopping means admitting failure, is almost non-existent.
The AI spending crisis is just the latest example of what happens when we lack that moral courage. We overspend on technology we don’t understand. We overpromise benefits we can’t deliver. We underdeliver results whilst overclaiming success. Then, when the consequences become undeniable, we blame users for not adapting, workers for lacking skills and critics for resisting progress.
We blame everyone except the people who made the decision to proceed without asking whether they should.
I don’t know if this article will change anything. The pattern is deeply embedded. The incentives all push towards proceeding despite consequences. The people with the power to stop projects aren’t the people bearing costs when projects fail.
But maybe acknowledging the pattern is a start. Maybe naming it as a moral failure rather than a technical challenge creates space to ask different questions. Maybe recognising that we’ve been treating moral decisions as technical problems helps us understand why we keep getting the same terrible outcomes.
Or maybe not. Maybe we’ll just keep building systems that harm whilst claiming we’re helping, keep excluding whilst claiming we’re including, keep destroying expertise whilst claiming we’re innovating.
But at least we’ll know we made a choice. That these aren’t inevitable consequences of technological progress but predictable results of moral failures. That we could have asked whether we should before spending billions on whether we could.
And maybe, eventually, someone will be brave enough to say no.

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes: