Workplace Insights by Adrie van der Luijt

When AI meets vulnerability

What I learned between Whitehall and the Money Advice Trust

The rush to implement AI in public services risks automating exclusion unless we centre vulnerability in every design decision.

In July, I delivered a virtual keynote to fifty practitioners at the Money Advice Trust’s Vulnerability Academy, talking about trauma-informed content design. It went well, I think, though virtual events make it hard to read the room when the room is fifty separate screens. The questions afterwards suggested I’d struck a nerve, particularly around how we design forms and processes for people in crisis.

During lockdown, I helped the Grants Management Function at the Cabinet Office deliver counter-fraud tools for Covid-19 grants. Two weeks ago, Penny Horner-Long, who heads the function, posted on LinkedIn about how her team is embracing AI. She wrote about using LLMs to accelerate the sharing of evaluation insights, improve understanding of what works across sectors, speed up grant delivery processes and strengthen funding due diligence.

She was careful to mention they’re taking an ethical approach to mitigate unintended bias. Reading her post took me back to 2020, when I worked on the Spotlight counter-fraud tools for the Cabinet Office. Back then, we were just starting to imagine what AI might do for grants management. Now it’s happening.

Obviously, it’s been a couple of years since I was involved, but I’m keen to know how the tools have evolved. So here are my thoughts, based on the limited information available to those of us outside the civil service. They are not a criticism, nor even an informed guess. They are observations from my perspective as a trauma-informed content designer.

When algorithms meet human distress

The contrast between my vulnerability keynote and Penny’s AI announcement captures something that’s been troubling me. We’re racing to implement AI across public services, but I’m not convinced we’re having honest conversations about what happens when algorithms meet human distress.

I’ve been thinking about this gap constantly in recent weeks. The practitioners at the Vulnerability Academy work with people whose lives have fallen apart. They see how seemingly simple processes become insurmountable obstacles when you’re dealing with trauma.

Meanwhile, the conversation about AI in government focuses on efficiency, fraud detection and pattern recognition. Both conversations matter. But they’re happening in separate rooms and that’s a problem.

The invisible line between error and crisis

My work on Spotlight taught me how the government thinks about fraud and error. The focus is on protecting public money, catching the cheats, ensuring compliance. Fair enough. But after years of working with vulnerable users, first at Cancer Research UK, then the Metropolitan Police Service and now in financial services, I know that the line between error and crisis is often invisible to systems designed to catch fraudsters.

Someone filling in a grant application inconsistently might be trying to defraud the system. Or they might be dealing with cognitive impacts of medication, trauma that’s fractured their ability to remember dates or shame that makes them unable to write certain truths. An AI trained to spot fraud will flag both the same way. But the interventions needed couldn’t be more different.

The FCA has been clear about vulnerability in financial services. Their guidance doesn’t suggest good practice; it mandates it. Firms must consider vulnerability in every process, every communication, every decision point. They identify four key drivers of vulnerability: health, life events, resilience and capability. Each intersects with the others in ways that create unique vulnerability profiles that shift and change over time.

Bias in AI is not just about protected characteristics

Now think about applying that complexity to AI-powered grant assessments or automated service delivery. The technology that promises to make services faster and fairer might actually amplify barriers for those who need help most.

During my time at the Met, I learned how digital systems designed for efficiency became walls that traumatised victims couldn’t climb. The beautiful, streamlined reporting systems that had been built previously worked perfectly for people who’d never experienced trauma. For everyone else, and that was most people reporting crimes, it was like asking someone with broken fingers to type their story. We listened and learned to do better.

What particularly troubles me is how we talk about bias in AI as if it’s just about protected characteristics. Yes, we need to ensure AI doesn’t discriminate based on race, gender or disability. But vulnerability bias is subtler. It’s about systems that work brilliantly for people who are coping and fail catastrophically for people who aren’t. And most people interacting with public services aren’t coping, not really. They’re muddling through crises with whatever energy they have left.

Designing for ideal users who don’t exist

I started in digital transformation in 1987, building one of Holland’s first government digital projects. Back then, the challenge was explaining what the internet was. Now, nearly four decades later, I watch us making familiar mistakes with unfamiliar technology. We’re still designing for ideal users who don’t exist, still measuring success by processing speed rather than human outcomes, still confusing digitisation with transformation.

Penny’s post mentions taking care to ensure an ethical approach. I worked with Penny and know that the intention is genuine. But ethics in AI isn’t just about avoiding discrimination. It’s about recognising that the people these systems serve are often at their lowest point. They’re not edge cases to be handled with exception processes. They’re the norm.

We risk losing human judgement

During my years building DeskDemon into a US and UK market leader for executive assistant resources, I learned that successful technology adoption isn’t about the technology. It’s about understanding the humans who’ll use it. Every executive assistant I worked with had developed sophisticated ways of reading human complexity, knowing when efficiency needed to yield to empathy, understanding that some problems can’t be optimised away.

That human judgement is what we risk losing. Not through malice or incompetence, but through misunderstanding what public service actually means. It’s not about processing applications efficiently. It’s about helping people when they need it most. The Grants Management Function’s evolution to embrace AI could be revolutionary if it embraces this truth. But if it just speeds up existing processes without rethinking them for vulnerability, we’ll have failed before we’ve started.

Bringing conversations into the same room

The path forward isn’t about choosing between efficiency and empathy. Without empathy, we won’t achieve genuine efficiency, because every person who gives up represents a failure, regardless of how quickly we processed their incomplete application. The question isn’t whether AI can learn trauma-informed practice. It’s whether we’ll insist that it must.

We need to stop treating vulnerability as something to handle after we’ve designed our systems. The practitioners at the Vulnerability Academy know what their clients need. The teams developing AI tools know what’s technically possible. These conversations need to happen in the same room, literally or virtually, before we embed approaches that will shape public services for decades.

I think about Penny’s mention of loads more to come in AI development for grants. I think about the fifty practitioners who spent Tuesday afternoon learning about trauma-informed design. I think about my own journey from building early digital services to understanding vulnerability. These threads need to weave together.

Are we transforming public services or digitising exclusion?

The technology is neutral. Our choices about how we deploy it are not. And right now, we’re choosing efficiency over understanding, speed over support, automation over the kind of careful human judgement that recognises when someone’s inconsistent answers aren’t fraud but fear.

Next time someone shows you their impressive AI implementation, ask them how it handles vulnerability. Ask them if they’ve consulted practitioners who work with people in crisis. Ask them what happens when trauma makes someone’s truth look like lies. Their response will tell you whether they’re transforming public services or just digitising exclusion.

The future of public service AI isn’t about choosing between catching fraudsters and helping vulnerable people. It’s about building systems sophisticated enough to tell the difference. Until we do that, we’re just automating the barriers that already exist and wondering why digital transformation never quite delivers what we promised.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes:

  • developing the UK’s national drink and needle spiking advice service used by 81% of police forces in England and Wales – praised by victim support organisations
  • creating user journeys for 5.6 million people claiming Universal Credit and pioneering government digital standards for transactional content on GOV.UK
  • restructuring thousands of pages of advice for Cancer Research UK’s website, which serves four million visitors a month.