Workplace Insights by Adrie van der Luijt

Building trust in AI government services

A content designer's perspective

A government content designer explores how to build trust in AI government services through trauma-informed approaches, transparency and ethical design principles.

As a content designer working in government digital services since 2012, I’ve witnessed firsthand how challenging it can be to build trust between citizens and the state through digital interactions.

Now, as artificial intelligence begins to reshape public sector delivery, we face a profound new challenge: how do we maintain authenticity and empathy when people who have never fully trusted government websites are increasingly interacting with AI government services?

This isn’t merely a theoretical question. It strikes at the heart of how government fulfils its responsibility to citizens, particularly those who are vulnerable, in crisis or already marginalised by digital systems.

Lessons from building trust in government digital services (2012-present)

My journey through government digital services has taught me valuable lessons about overcoming mistrust that remain relevant as we enter the AI era.

Universal Credit: overcoming data anxiety

Working on Universal Credit, I encountered deep-seated mistrust around sharing personal data with government websites.

Our response was three-fold: we drastically cut the number of questions asked, put everything in plain English, and were transparent about why we needed specific information.

The key insight? People will share personal information when they understand its purpose and can see how it directly benefits them.

This principle becomes even more critical as AI government services begin processing ever-larger amounts of citizen data.

Met Police: signalling that citizens are heard

At the Met Police, we faced a different trust challenge. People reporting suspected drink spiking often felt that police didn’t take such crimes seriously.

Our content design approach focused on creating a tone of voice that signalled the police were taking these reports seriously as crimes in their own right.

Victim support organisations praised this approach specifically because it addressed the underlying concern: “Will I be believed?”

This question becomes even more pressing when AI enters the picture. If people doubt they’re being heard by human officers, how will they feel about algorithmic processing?

Cabinet Office: the balancing act

Another instructive example comes from the Cabinet Office, where we needed to request information to detect potential fraud without creating anxiety or alerting applicants to our fraud detection methods.

This required a delicate balance: gathering necessary data while maintaining a trusting relationship.

This balancing act foreshadows the challenges of explaining AI-powered fraud detection systems, which must be effective without creating a sense of surveillance or suspicion.

Explaining complex decision-making

When approaching the challenge of explaining AI government services to already mistrustful users, I’m struck by the similarities to my work with the Met Police on suspected drink spiking reports.

In both cases, the key lies in centring user needs rather than institutional processes.

Put user needs first, explanations second

With suspected spiking victims, we discovered that focusing first on getting them the medical support they needed created space for trust to develop.

Only then could we effectively guide them through the reporting process.

I suspect the same principle applies when introducing AI government services. Users don’t primarily care about the technical details of how a system works; they care about getting the help they need efficiently and being treated with dignity.

Content design for AI-powered services should therefore:

1. Lead with how the technology helps users achieve their goals faster (“This system helps us process your application in minutes instead of weeks”)
2. Then provide appropriate transparency about AI involvement (“We use automated systems to check your eligibility”)
3. Always emphasise human oversight where it exists (“Our team reviews all decisions”)


Transparency without overwhelming

The Universal Credit service taught me valuable lessons about providing transparency without creating anxiety.

We found that explaining why we needed certain information – without going into excessive technical detail – built trust more effectively than either complete opacity or overwhelming explanations.

With AI government services, this becomes even more crucial. Users need to understand what’s happening with their data and how decisions are being made, but excessive technical jargon about algorithms creates cognitive load without building trust.

Consider these contrasting approaches:

❌ “Our advanced machine learning algorithm analyses 47 data points using a neural network to determine eligibility with 94% accuracy.”

✅ “We use a digital system to check your information against our eligibility criteria. This helps us process your application faster so you can get support more quickly.”

The right moment for explanation

Another lesson from crisis communications: timing matters. When someone is anxious about their benefits or reporting a crime, they’re not in the ideal mental state to absorb complex information about technology.

Layer explanations about AI in a way that respects the user’s journey:

  • Initial focus: What they need to do and how it helps them
  • Secondary layer: Basic transparency about automated systems
  • Deeper layer: More detailed explanations for those who want them
  • Always available: Clear information about how to speak to a human


This approach acknowledges that trust is built progressively, not all at once.
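
For teams building these services, the sketch below shows one way those layers might be modelled in code. It is a minimal illustration only: the layer names, wording and contact route are hypothetical placeholders, not tested content or an established pattern.

```typescript
// Hypothetical content model for layered explanations of an AI-supported service.
// All copy and contact details are placeholders, not real service content.
type ExplanationLayer = "initial" | "secondary" | "deeper" | "humanContact";

const layeredContent: Record<ExplanationLayer, string> = {
  initial:
    "Answer these questions so we can check what support you can get.",
  secondary:
    "We use an automated system to check your answers against the eligibility rules.",
  deeper:
    "The automated check compares your answers with the published criteria. A case worker reviews any application the system cannot decide on its own.",
  humanContact:
    "You can speak to an advisor at any point by choosing ‘speak to an advisor’.",
};

// Return only the layers the user has asked for, but always include
// the route to a human, so that option is never hidden.
function explanationFor(requested: ExplanationLayer[]): string[] {
  const layers = new Set<ExplanationLayer>([...requested, "humanContact"]);
  return Array.from(layers).map((layer) => layeredContent[layer]);
}

// Example: a first visit shows only the initial message plus the human contact route.
console.log(explanationFor(["initial"]));
```

The point of the sketch is that the route to a human is always appended, whatever depth of explanation the user asks for.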

Balancing automation and human elements

While I haven’t yet worked directly on AI government services balancing automation with human oversight, I’ve witnessed firsthand the challenges of what we might call the “system versus human” dynamic.

Users of Universal Credit often felt powerless against an anonymous system that decided how much and when they would get paid. This is a classic “computer says no” scenario that bred frustration and mistrust.

AI potentially removes even the last elements of human interaction from the user journey, which creates a profound challenge for trust. Ironically, this comes at a time when AI systems are actually becoming more capable of delivering personalised, accurate assessments than ever before.

This creates a fascinating paradox: AI is likely better able to assess situations, analyse benefits entitlements and work out relevant options within seconds, yet the public is inclined to mistrust computer decisions as inherently unfair or impersonal.

The empathy balancing act

Content designers face a delicate balance in this new landscape. We need to:

1. Develop a humane and empathetic tone of voice for AI applications to build trust
2. Avoid misleading people about whether they’re dealing with AI or an actual person
3. Communicate the benefits of automated processing without dismissing the value of human judgement

This isn’t merely a stylistic concern; it’s an ethical one. When people feel they’re talking to a human but discover they’ve been interacting with AI, the sense of deception can permanently damage trust.

Yet completely mechanical interactions can feel cold and uncaring, especially when dealing with vulnerable people in crisis.

Signposting the human touch points

Perhaps the most important lesson I’ve learned from working on high-stakes government services is the value of clearly signposting when and how users can access human support.

Even in highly automated systems, content needs to:

  • Clearly indicate when users are interacting with automated systems
  • Provide straightforward pathways to human assistance (a simple routing sketch follows this list)
  • Explain the roles of both automated systems and human oversight
  • Set realistic expectations about response times and process steps
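
As a rough sketch of what that routing could look like behind the scenes, the example below shows an automated assistant that always honours a request to reach a person. The trigger phrases, wording and opening hours are invented for illustration.

```typescript
// Hypothetical escalation check for an automated assistant.
// Trigger phrases, wording and opening hours are placeholders.
const ESCALATION_PHRASES = ["speak to an advisor", "talk to a person"];

interface Reply {
  message: string;
  routeToHuman: boolean;
}

function handleMessage(userText: string): Reply {
  const wantsHuman = ESCALATION_PHRASES.some((phrase) =>
    userText.toLowerCase().includes(phrase)
  );

  if (wantsHuman) {
    return {
      message:
        "I’m connecting you to the support team. Advisors are available Monday to Friday, 8am to 6pm.",
      routeToHuman: true,
    };
  }

  return {
    message:
      "I’m an automated assistant. I can help with your application, or you can ask to speak to an advisor at any time.",
    routeToHuman: false,
  };
}

// Example: an explicit request for a person is never absorbed by the assistant.
console.log(handleMessage("Please can I speak to an advisor?"));
```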

Humanising ‘the system’

While we shouldn’t anthropomorphise AI, we can humanise the overall service by focusing on the people behind it:

  • Who built this service and why?
  • What values guided its development?
  • Who reviews outcomes and handles exceptions?
  • How does feedback improve the system?

By acknowledging both the power of automation and its limitations, we can build services that harness AI’s capabilities while maintaining the human connection that builds genuine trust.

Designing for vulnerable users

When designing AI government services for vulnerable users, we face what I’ve come to think of as the “trust tightrope”.

My working assumption, based on years of designing content for sensitive services, is that vulnerable people will feel misled – even betrayed – if they discover they were not talking to a human but to a cleverly trained AI application. This revelation can erode already fragile trust.

Yet the alternative presents its own problems. If we give AI a less human, more robotic tone of voice, users will likely feel alienated. This approach risks reinforcing the perception that they’re dealing with a heartless, anonymous system – exactly the feeling that has driven mistrust in government digital services for years.

Principles for this new territory

Based on my experience with vulnerable users in high-stakes government services, I believe several content principles will become essential when designing AI interactions:

  1. Transparent identity from the start
    Rather than revealing the AI nature of an interaction after the fact, establish clarity from the beginning. This doesn’t mean starting with technical explanations, but rather with simple signposting:

    ❌ “Hi, I’m Sarah. How can I help you today?” (When ‘Sarah’ is actually an AI)

    ✅ “Welcome to the Universal Credit digital assistant. I’m here to help answer your questions and guide you through the application process.”

  2. Focus on capabilities, not personality
    Vulnerable users primarily need systems that can effectively help them, not systems that pretend to be human. Content should emphasise what the AI can do for them rather than creating a pseudo-human personality:

    ❌ “I understand how frustrating this situation must be for you.”

    ✅ “I can help you check your eligibility, submit your application, or connect you with a support advisor. What would be most helpful right now?”

  3. Clear escalation paths to humans
    For vulnerable users especially, knowing they can access human support creates a crucial safety net. Content should make these pathways obvious and accessible:

    “At any point, you can type ‘speak to an advisor’ to connect with the support team. Advisors are available Monday to Friday, 8am to 6pm.”

  4. Appropriate emotional intelligence
    This is perhaps the most delicate balance. AI systems can recognise emotional cues and respond appropriately without pretending to have emotions themselves:

    ❌ “I feel terrible that you’re going through this difficult time.”

    ✅ “You’ve mentioned this is a difficult time. Would you like information about additional support services that might help?”

  5. Consistency in identity and limitations
    Being consistent about the AI’s capabilities and limitations builds trust more effectively than overpromising:

    “I can help with application questions, but I don’t have access to your specific case details. For that, you’ll need to speak with an advisor.”


Learning from vulnerable users

Perhaps most importantly, we need to continuously test these interactions with vulnerable users themselves.

My experience with Universal Credit and Met Police services taught me that assumptions about what builds trust often need revision when confronted with real user feedback.

The goal isn’t to create AI that perfectly mimics humans, but rather AI that complements human services in a way that is honest, helpful and respectful of users’ intelligence and dignity.

Above all, such a system needs to explain to users what their options are and put them in charge of what they want to do next – rather than trapping them in a death spiral of ‘computer says no’ or ‘your selection has not been recognised’.

Trauma-informed considerations for AI government services

When discussing AI government services, we must confront a fundamental question: if a victim wants to report domestic abuse, are they truly better served by AI or by humans?

This question gets to the heart of trauma-informed approaches to public services.

My work with the Met Police on sensitive crime reporting has taught me that trauma creates specific needs that technology alone may struggle to address.

Victims of domestic abuse, sexual violence or other traumatic experiences often need:

  • To feel genuinely heard and believed
  • To see recognition of their individual circumstances
  • To know that someone cares about their specific situation
  • To have flexibility in how they share their experiences
  • To feel safe during vulnerable disclosures

These needs are profoundly human and relational. While AI can potentially reduce waiting times and provide immediate responses, we must ask: would victims truly confide in AI applications, or would they feel that no one is listening?

Appropriate boundaries for AI in trauma contexts

This doesn’t mean AI has no place in trauma-informed services. Rather, content designers must establish appropriate boundaries for AI use:

Appropriate for AI:

  • Initial information provision about available services
  • Navigation assistance to guide users to appropriate support
  • Simple administrative tasks that reduce friction
  • Optional preliminary information gathering (with clear explanation of purpose)
  • Follow-up reminders and updates

Requiring human involvement:

  • Disclosure of traumatic experiences
  • Emotional support during crisis
  • Complex safety planning
  • Nuanced risk assessment
  • Building relational trust over time


Hybrid approaches for trauma-informed services

The most promising path forward may be thoughtfully designed hybrid approaches where AI and human services complement each other. Content design plays a crucial role in creating these models by:

  1. Clearly distinguishing AI-supported from human-supported parts of the journey (a simple sketch of this follows the list)
  2. Using AI to reduce administrative burden so human staff can focus on meaningful connection
  3. Ensuring AI systems recognise trauma indicators and provide appropriate pathways to human support
  4. Creating content that acknowledges the emotional weight of disclosure without pretending to understand it
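
To make the first point concrete, here is a simplified, hypothetical model of a hybrid journey in which every step declares whether it is automated or human-supported, so the content can always say so honestly. The step names and labels are invented for illustration, not drawn from any live service.

```typescript
// Hypothetical model of a hybrid journey: each step states whether it is
// handled by an automated system or by a person, and the user-facing label says so.
type Support = "automated" | "human";

interface JourneyStep {
  name: string;
  support: Support;
  userFacingLabel: string; // the honest description shown to the user
}

const exampleJourney: JourneyStep[] = [
  {
    name: "find-support-information",
    support: "automated",
    userFacingLabel: "An automated assistant helps you find the right service.",
  },
  {
    name: "share-what-happened",
    support: "human",
    userFacingLabel: "A trained member of staff will take your report.",
  },
  {
    name: "progress-updates",
    support: "automated",
    userFacingLabel: "Automatic updates tell you what happens next.",
  },
];

// A simple content check: every step must carry an honest, non-empty label.
const missingLabels = exampleJourney.filter((step) => !step.userFacingLabel.trim());
console.log(`Steps missing an honest label: ${missingLabels.length}`);
```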

Ethical considerations for content designers

When working on trauma-informed services that incorporate AI, content designers face specific ethical questions:

  • How do we avoid creating a two-tier system where only those with digital skills get prompt service?
  • How do we ensure that efficiency doesn’t become more important than emotional safety?
  • How do we design for moments when AI might fail to recognise trauma cues?
  • How do we maintain appropriate boundaries while still creating supportive interactions?

These questions require ongoing dialogue between content designers, service designers, trauma specialists, and – most importantly – those with lived experience of trauma interacting with public services.

Conclusion: designing AI government services for the future

As AI government services become increasingly common, content designers stand at a critical intersection. We’re tasked with building trust in systems that many citizens approach with inherent scepticism.

We’re responsible for humanising interactions while remaining honest about their automated nature. And we’re challenged to create trauma-informed pathways that respect the profound human needs of the most vulnerable service users.

This is uncharted territory. While my experience with Universal Credit, Met Police reporting systems, and Cabinet Office services provides valuable principles to guide us, the honest truth is that we’ll need to develop new approaches specifically for the AI era.

What seems clear is that transparency, user-centred design and ethical considerations will be even more important in AI government services, not less. And perhaps most critically, we’ll need to continuously test and refine our approaches based on real user feedback, particularly from those who have historically been least served by digital government.

The future of content design in AI government services isn’t about writing clever prompts for chatbots. It’s about designing responsible, transparent, and genuinely helpful systems that maintain the human connection at the heart of public service.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.