Workplace Insights by Adrie van der Luijt

both sides now

Bridging the gap between government AI readiness and AI citizen experience

A content designer's perspective on balancing GDS's focus on internal AI readiness with essential human-centred design considerations for vulnerable citizens accessing AI-powered public services.

GDS recently published fascinating research on how civil servants are approaching AI implementation across government.

Their findings reveal a workforce grappling with understanding AI capabilities, managing risks, and developing the skills needed to use these technologies effectively.

What struck me most, however, was what’s missing from this conversation. While GDS rightly focuses on government readiness – the internal implementation side of AI – there’s limited exploration of the AI citizen experience: how these AI systems will affect people’s interactions with public services, particularly for vulnerable users.

Having worked on content design for Universal Credit and Met Police services since 2012, I’ve seen firsthand how even conventional digital services can strain citizens’ interactions with government. AI introduces an entirely new dimension to that experience.

Successful human-centred AI implementation in government requires both perspectives: the internal capability focus that GDS outlines and the AI citizen experience considerations I explore in my work. This post examines how we might bridge that gap.

The two sides of AI public service design

The GDS research gives us valuable insight into the internal challenges of AI adoption:

  • Civil servants need clearer understanding of AI capabilities and limitations
  • Departments require governance frameworks to manage risks
  • Staff need upskilling and practical examples to learn from
  • Leaders want inspiration about real capabilities

These are essential foundations. But they represent only half of the equation. The other half focuses on the AI citizen experience:

  • How citizens will experience and navigate AI-powered public services
  • Whether vulnerable citizens can effectively use these AI systems
  • How to maintain empathy and connection in the AI citizen experience
  • When human intervention should be prioritised over AI efficiency

Both sides must be addressed simultaneously if human-centred AI is to enhance, rather than complicate, citizens’ experience of government services.

Where GDS research leaves gaps in the AI citizen experience

The GDS article notes that “AI changes the way we interact with technology, and the way we build digital products. This change can be scary: it might change how frontline staff deliver their services, what digital and data roles do or how they collaborate.”

This observation is insightful for internal audiences. But it stops short of examining how these same changes affect the AI citizen experience, particularly for those already struggling with digital government services.

While the research acknowledges concerns about “privacy, bias, ethics, concerns around plagiarism, security and potential for misuse,” it approaches these primarily as implementation challenges rather than factors that shape the AI citizen experience.

What’s missing is the citizen perspective: how will people navigate the AI citizen experience when interacting with public services? Will they know when they’re talking to AI versus humans? How might trauma survivors respond to automated systems when reporting sensitive issues?

Bringing a content design perspective to the AI citizen experience

My work with Universal Credit revealed deep challenges around citizens’ willingness to share personal data with government websites.

At the Met Police, we discovered that victims of suspected drink spiking needed to feel genuinely heard and understood, a challenge that takes on new dimensions when designing the AI citizen experience.

Content design offers a crucial bridge between technical implementation and the AI citizen experience. It’s not merely about writing instructions or explanations; it’s about crafting interactions that balance efficiency with human needs while being transparent about the nature of automated systems.

The “computer says no” phenomenon already complicates government service interactions. AI could either worsen this alienation or dramatically improve the AI citizen experience, depending entirely on how we design with human needs in mind.

Specific considerations missing from the conversation

Trauma-informed AI approaches

The GDS research doesn’t address how trauma might affect interactions with AI systems. Yet many government services – from reporting crimes to seeking benefits – are accessed during times of crisis or vulnerability.

A trauma-informed approach to AI would:

  • Clearly distinguish AI-supported from human-supported parts of the journey
  • Recognise when emotional needs require human intervention
  • Avoid creating interactions that feel dismissive or mechanistic
  • Provide clear, straightforward pathways to human support


Setting clear boundaries for AI use

Not all government services are equally suitable for AI implementation. We need clearer boundaries about:

  • Which parts of a service can be enhanced by AI
  • Which moments require human judgement and empathy
  • How to design smooth transitions between automated and human interactions
  • When efficiency should take a back seat to emotional safety


Transparent identity without alienation

The GDS research doesn’t address the delicate balance of creating AI interactions that are:

  • Honest about their automated nature
  • Yet don’t feel cold or inhuman
  • Clear about limitations without creating frustration
  • Empathetic without being deceptive

This balance is particularly important for government services, where trust is already fragile and the impacts of decisions can be life-altering.

Practical recommendations for bridging the gap

For the AI Playbook

The GDS article mentions a new AI Playbook for UK Government. This resource could be strengthened by incorporating:

  • Guidance on transparent AI identity and appropriate tone
  • Frameworks for identifying when human intervention is necessary
  • User research approaches specifically focused on vulnerable users’ interactions with AI
  • Methods for testing citizen trust in AI government services
  • Content design principles for explaining automated decisions


For digital teams implementing AI

Teams working on AI implementation should:

  • Include content designers who understand the citizen experience from the earliest stages of development
  • Conduct user research specifically with vulnerable populations on how they experience AI services
  • Create clear escalation paths from AI to human support for when automated interactions break down
  • Test systems not just for functionality but for the quality of the citizen experience
  • Develop metrics that measure both efficiency AND the quality of citizen interactions

For governance frameworks

When evaluating AI systems, governance should consider:

  • Whether the system builds or erodes citizen trust
  • How transparent the system is about its automated nature
  • Whether appropriate human touchpoints are maintained
  • How the system handles emotionally complex situations
  • Whether the system creates a two-tier experience for digitally confident versus vulnerable users

Call to action

The GDS research presents an opportunity to begin a more holistic conversation about AI in government. I’d encourage:

  • Collaboration between technical implementation teams and content designers
  • Inclusion of vulnerability experts in AI planning discussions
  • Development of ethical frameworks specifically addressing citizen trust
  • Cross-departmental sharing of both implementation AND citizen experience insights

As the AI Playbook and related resources evolve, I’d welcome opportunities to contribute the citizen experience perspective alongside the valuable internal readiness work that GDS is undertaking.

Conclusion

Successful human-centred AI in government requires both sides of the equation: internal capability and a focus on the AI citizen experience.

Content design sits at this crucial intersection, translating technological systems into human experiences that feel accessible, supportive and genuinely helpful.

The vision shouldn’t be merely to implement AI efficiently, but to create AI-powered services that all people – especially the most vulnerable – can navigate with confidence and dignity.

This means moving beyond the technical questions of how to implement AI to the human questions of how it affects people’s experience of essential services.

As AI increasingly shapes government service delivery, let’s ensure the conversation includes both those building the systems and those designing how citizens will experience them.

Only by addressing both sides can we realise AI’s full potential to transform public services for the better.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.