Workplace Insights by Adrie van der Luijt

Reading the government's digital roadmap

Why good intentions aren't enough when you're designing services for vulnerable users

The new digital roadmap has the right ambitions, but it risks optimising for efficiency rather than outcomes for the people who need government services most.

The government’s new digital roadmap has landed with considerable fanfare. As someone who’s spent four decades watching digital transformation programmes promise to revolutionise public services, I read it with interest. And caution. And, if I’m honest, a fair bit of déjà vu.

I should say upfront: I want this to work. I’m currently contracting for the Department for Education, doing content design work that tries to make government services clearer and more accessible. I’ve supported 5.6 million Universal Credit claimants through digital transformation. I’ve been in the room for discussions about disability assessments that revealed just how brutal our services can be when we prioritise efficiency over outcomes. So when I read this roadmap, I’m reading it as someone who genuinely wants to see better public services, but who’s also learned to spot the structural patterns that turn good intentions into predictable harm.

What the roadmap gets right

The talent section acknowledges a real crisis, but – in my view – misdiagnoses it. Yes, only 6% of central government staff work in digital and data roles. Yes, £26 billion goes to external expertise. But the roadmap presents this as a simple equation: too much money on contractors, need more permanent staff.

I’ve been one of those contractors since 2012. The reality is considerably more complex.

That £26 billion isn’t a homogeneous blob of “contractors doing work civil servants should do”. It includes management consultants billing eye-watering day rates for strategic advice that evaporates when they leave. It includes massive IT contracts with big suppliers, the kind that promise transformation and deliver expensive failure. It includes AI consultants selling productivity miracles. And yes, it includes skilled specialists like content designers, user researchers and developers filling genuine temporary expertise gaps.

These are not the same thing.

I’ve watched my own team expand from 9 people to 53 and collapse back to 12 within three weeks as budgets shifted and priorities changed. That’s not an environment where you can build long-term institutional capability through permanent hires. That’s an environment that requires flexible expertise you can scale up and down as the work demands.

The instability isn’t caused by using contractors. The instability causes the need for contractors.

Meanwhile, top contractors get pulled off projects mid-stream when another department offers higher day rates to lure them to a seemingly more important project. The result is a cat-and-mouse game of internal poaching that wastes both money and momentum. This isn’t a contractor problem. It’s a coordination problem, a budgeting problem and a priorities problem.

Recent governments have simultaneously overrelied on AI as a silver bullet whilst kowtowing to tabloid criticism about “spending billions on consultants”. The result: they’ve been reluctant to invest in the skilled contractors who could actually deliver while continuing to hand massive contracts to IT firms and management consultancies that fail to deliver value.

I’ve worked on multiple projects where £1 billion contracts to IT firms or management consultancies – or both – produced expensive disasters: projects where, despite the price tag, the IT side was almost entirely absent. That pattern hasn’t changed. What has changed is the narrative blaming “contractors” as if we’re all in the same category.

Here’s what the roadmap should acknowledge: the government desperately needs permanent internal expertise to judge whether private sector suppliers are delivering value for money. Not to replace all external expertise – you can’t and shouldn’t – but to maintain the institutional knowledge and technical capability to call bullshit when a big consultancy promises the moon and delivers expensive PowerPoint. And yes, I have certainly seen that film before.

The commitment to competitive pay frameworks for permanent specialists is necessary. But it’s not sufficient. You also need:

  • Stable, multi-year funding so teams can plan beyond the next spending review
  • Protected budgets for the skilled contractors who fill genuine temporary gaps
  • Clear criteria for when you need permanent capability versus when you contract
  • Institutional memory about which suppliers actually deliver versus which ones just have good sales teams
  • The courage to say no to the next AI miracle cure being sold by the latest consultancy

Without these, you’ll keep spending billions on the wrong kind of external expertise while losing the contractors who could actually help.

The measurement vacuum

But here’s where it gets tricky. The roadmap commits to “creating clear, consistent ways to measure digital inclusion, user experience, resilience, AI adoption and value for money”.

Notice what’s in that list and what isn’t.

We’re measuring adoption rates, transaction times and user satisfaction scores. We’re not measuring mandatory reconsideration rates on Universal Credit decisions. We’re not measuring how many disability assessment decisions get overturned at tribunal. We’re not measuring how many people gave up on accessing a service because the digital barriers were too high.

This isn’t an oversight. It’s a design choice, even if an unconscious one.

When you measure “user satisfaction”, you’re measuring the satisfaction of people who successfully completed the service journey. You’re not measuring the experience of the grandmother who doesn’t have a smartphone, the trauma survivor whose executive function is impaired, the person with aphasia who can’t process the language in your forms or the family using public library computers on 30-minute time limits.

These people don’t show up in your satisfaction surveys. They show up at food banks. They show up at Citizens Advice. They show up in tribunal statistics, but no one is connecting them back to the original digital service design.

The roadmap mentions working with local government to “reduce the burden on you” and improving “access to devices and connectivity, boosting digital skills and confidence and providing local support in communities across the UK”.

That’s digital inclusion as a bolt-on charity programme. It’s not digital inclusion as a structural design principle.

AI and the accountability gap

The AI section of the roadmap is particularly revealing. It promises “quicker diagnoses in healthcare, better-targeted training opportunities or faster access to information and advice”.

Let me translate that through the lens of someone who’s studied how these systems actually work: AI will triage patients, decide who qualifies for training funding and filter access to human advisors.

Now, AI can do these things well. But only if we’re measuring the right outcomes and maintaining human accountability at the right points in the system.

The roadmap states: “We’re building systems that are safe, explainable and equitable, so you can trust how decisions are made”. It commits to “responsible AI adoption” with “frontline public servants overseeing individual decisions and senior leaders remaining responsible for system-level outcomes”.

This is the right language. But there’s a structural tension here that the roadmap doesn’t address.

If your AI system is designed to improve “productivity” and “efficiency” – both mentioned repeatedly – then your success metrics are about reducing cost per transaction and increasing throughput. Those metrics create pressure to remove the human touchpoints that catch edge cases, that spot the person whose situation doesn’t fit the algorithm’s assumptions and that override the system when the system is clearly wrong.

I’ve watched this happen. The system flags someone as potentially fraudulent because their circumstances don’t match the pattern. The caseworker who might have spotted the mistake – that this is a domestic abuse survivor whose chaotic living situation is a symptom of trauma, not fraud – has been eliminated in the name of efficiency. The decision gets automated. The person loses their benefits. Six months later, it gets overturned at a tribunal, but by then the damage is done.

The roadmap needs to explicitly state where, in each service journey, a qualified human being with decision-making authority must review AI recommendations. What are the triggers for mandatory human review? How do we measure whether those human touchpoints are actually functioning or whether they’re being eroded by efficiency pressures?
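To make that demand concrete: the kind of rule I’m arguing for can be written down explicitly and audited, rather than left to efficiency pressures to erode. Here’s a minimal sketch of what mandatory-review triggers might look like in code. Every field name and threshold below is a hypothetical illustration of mine, not anything from the roadmap or any real departmental system.

```python
from dataclasses import dataclass

# Hypothetical illustration: the field names and thresholds are invented
# for this sketch. The point is that "when must a human review this?"
# can be an explicit, testable rule rather than an informal practice.

@dataclass
class Recommendation:
    action: str                       # e.g. "suspend_payment", "approve"
    confidence: float                 # model confidence, 0.0 to 1.0
    affects_income: bool              # would the decision touch someone's benefits?
    claimant_flagged_vulnerable: bool # known vulnerability marker on the case

def needs_human_review(rec: Recommendation) -> bool:
    """Return True when a qualified caseworker must review the decision."""
    if rec.affects_income and rec.action != "approve":
        return True   # never automate taking money away from someone
    if rec.claimant_flagged_vulnerable:
        return True   # a vulnerability flag always triggers human review
    if rec.confidence < 0.9:
        return True   # low model confidence goes to a human
    return False

# A payment suspension is reviewed however confident the model is:
rec = Recommendation("suspend_payment", confidence=0.99,
                     affects_income=True, claimant_flagged_vulnerable=False)
print(needs_human_review(rec))  # True
```

The design choice that matters is that the first rule cannot be overridden by a confidence score: some categories of decision are simply never automated, no matter how sure the system is.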

When “digital by default” becomes structural violence

Here’s the uncomfortable truth that the roadmap acknowledges but doesn’t fully address: “Almost half of central government and NHS services are still not available online”.

This is framed as a failing, something to be fixed through digital transformation.

But for the 4% of UK adults who lack mobile phone access, for the people whose cognitive conditions make digital interfaces genuinely inaccessible even with assistive technology and for the trauma survivors whose executive function is impaired, those offline services are a lifeline.

The Digital Inclusion Action Plan sits under the “Join up services” section. It focuses on “increasing access to devices and connectivity, boosting people’s digital skills and confidence and providing local support”.

This assumes that the problem is on the user’s side. They lack devices. They lack skills. They lack confidence.

But what about the user who has a smartphone and reasonable digital literacy, but whose ADHD means that a 47-screen Universal Credit application – one that requires them to remember information from six months ago, upload documents in a specific format and do it all within a rigid timeframe – is genuinely impossible?

What about the person with aphasia who can process simple sentences of about five words, but your service content averages 15-20 words per sentence because it’s optimised for the “average” user?

What about the person whose right to benefits shouldn’t be conditional on their ability to use digital technology at all?

The roadmap should explicitly commit to maintaining functional offline alternatives with equivalent processing times and outcomes. Not as a temporary measure until everyone gets “digitally included”, but as a permanent design principle that recognises that digital exclusion isn’t always about a lack of access or skills; sometimes it’s about genuine cognitive, sensory or circumstantial barriers that no amount of training will overcome.

The trauma-informed gap

I recently read Rebekah Barry’s “Considerate Content”, which draws on her work creating accessible content for Citizens Advice and the Department for Work and Pensions. She makes a crucial point: around 70% of people who have had aphasia for three months experience depression. Many people with ADHD experience hyperfocus on compelling tasks but struggle with mandatory bureaucracy. Autistic people may interpret things literally, making metaphors and idioms in service content genuinely confusing rather than mildly annoying.

The roadmap doesn’t mention trauma once. It doesn’t acknowledge that post-pandemic, cognitive impairment affects far broader populations than it did in 2019. It doesn’t recognise that the people most likely to need government services are also the people most likely to be dealing with cognitive load from poverty, trauma, mental health challenges or neurological conditions.

When the roadmap promises “personalised and proactive services through the GOV.UK app”, I want to know: personalised based on what data? Proactive in what ways? If the personalisation is based on previous interactions and the person’s previous interactions were all marked by cognitive overwhelm and confusion, are you personalising toward more confusion?

The CustomerFirst unit promises to “transform critical government services for citizens” and “redesign services from end-to-end”. This could be brilliant. Or it could be another layer of digital optimisation that works beautifully for people who don’t need help and fails catastrophically for people who do.

What would actually help

If I were writing the roadmap from the ground up, here’s what I’d add:

Mandatory trauma-informed design standards: Every service should be designed assuming users are cognitively impaired, dealing with multiple crises, using unreliable technology and unable to remember information from one session to the next. That’s your baseline, not your edge case. And it’s a baseline backed by research findings from around the globe.

Outcome measures that matter: Track tribunal overturn rates. Track mandatory reconsideration success rates. Track the gap between people who start a service journey and people who complete it, then talk to the people who dropped out. Track how many people show up at food banks in postcodes where you’ve just digitised a benefits service.

Statutory offline alternatives: Make it a legal requirement that every digital service maintains a functional offline alternative with equivalent processing times. Not a phone line that tells you to go online. An actual alternative that works.

Human override points: Map exactly where in each service journey a qualified human being must review AI recommendations. Make those touchpoints protected from efficiency pressures. Measure whether they’re functioning.

Cognitive load budgeting: Before you add any feature, any notification or any “helpful” reminder, calculate the cognitive load cost. Then cut it by half. Then cut it again.

The path forward

I don’t think the people writing this roadmap are villains. I think they’re professionals trying to solve genuine problems with limited resources and competing pressures. I think they genuinely believe AI and digital platforms can deliver better services.

They might be right. But only if we design for the people who need help most, not the people who need it least. Only if we measure outcomes that matter, not just metrics that look good on dashboards. Only if we maintain human judgement at the points where algorithmic decisions affect people’s ability to eat, stay housed or access healthcare.

The roadmap has the right ambitions. It just needs sharper teeth around accountability and a clearer commitment to the people who currently get left behind when we optimise for efficiency.

Because if we get this wrong, we won’t just have wasted £26 billion. We’ll have built the infrastructure for a more efficient form of structural violence, wrapped in the language of transformation and personalisation and user-centred design.

And we can honestly do better than that.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes:

  • developing the UK’s national drink and needle spiking advice service used by 81% of police forces in England and Wales – praised by victim support organisations
  • creating user journeys for 5.6 million people claiming Universal Credit and pioneering government digital standards for transactional content on GOV.UK
  • restructuring thousands of pages of advice for Cancer Research UK’s website, which serves four million visitors a month.