Workplace Insights by Adrie van der Luijt

Trauma-informed content and AI

AI isn't replacing empathy in content design – it's amplifying it

Trauma-informed content can be produced with the help of AI tools without risking secondary trauma to the content designer or content strategist.

The digital content world has a new false dichotomy: AI versus human empathy. As someone who’s built trauma-informed content services like Police.uk’s spiking information portal and rewritten 5,000 pages of cancer content for Cancer Research UK, I can confidently say this framing misses the point entirely.

The question isn’t whether AI can replace human empathy, but how it transforms our ability to deploy empathy at scale without destroying ourselves in the process.

Learning empathy through constraints

My journey to creating trauma-informed content didn’t start in a UX lab. It began in local newspaper newsrooms and Dutch radio, where I had precisely three minutes to get an interviewee to open up about something meaningful. Those constraints taught me something crucial: empathy isn’t just an innate quality. It’s a skill built through deliberate practice and methodology.

Later, as an executive assistant anticipating leaders’ needs before they arose, and launching major online portals where understanding user behaviour was paramount, I refined these empathy skills into systematic approaches. At Director of Finance Online and SME Web, identifying the precise window of opportunity to gain a reader’s attention – be it via an early-morning newsletter or a knowledge repository – required a methodical understanding of user contexts.

This systematic approach to empathy is exactly why AI can help us create better trauma-informed content.

The false empathy monopoly

The idea that humans have a monopoly on creating empathetic content is fundamentally flawed. The measure of trauma-informed content isn’t who created it; it’s whether it:

  1. Recognises trauma’s impact
  2. Puts users in control
  3. Avoids retraumatisation
  4. Creates psychological safety

My Police.uk spiking information service didn’t succeed because I personally experienced spiking. It succeeded because I applied rigorous design principles and deeply understood psychological safety needs, principles that can be systematised, taught and yes, encoded into AI prompts.

Where AI actually excels

The uncomfortable truth: AI can absolutely create trauma-informed content when directed properly. Here’s how:

1. Research acceleration without burnout

I experienced burnout researching domestic abuse. The emotional toll of immersing yourself in traumatic content for weeks is real and devastating. AI transforms this process by:

  • Processing thousands of research papers, victim testimonies and clinical guidelines in hours
  • Identifying patterns in trauma responses across different populations
  • Summarising research without requiring humans to marinate in traumatic content

AI doesn’t get traumatised. It doesn’t need therapy after reading 500 accounts of pets being violently attacked to intimidate domestic abuse victims. This preservation of mental health alone is revolutionary for those of us in sensitive domains.

2. Testing different approaches simultaneously

When I rebuilt the Ofsted childcare registration service that had left young women in tears, finding the right tone required extensive iteration. AI enables us to:

  • Generate multiple versions with different empathy approaches
  • Test variations in language around agency and control
  • Rapidly prototype different navigation paths for traumatised users

With AI, I can create 20 variations of a sensitive flow and evaluate them all, rather than having the bandwidth for just two or three.
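As a minimal sketch of what "20 variations" looks like in practice, the snippet below combines a few tone dimensions into prompt variants for an AI drafting tool. The dimensions and example phrasings are illustrative assumptions, not a fixed taxonomy:

```python
from itertools import product

# Hypothetical tone axes for a sensitive flow. The values here are
# illustrative; a real set would come from your style guide and research.
AGENCY = ["You can choose to...", "If you want, you may...", "It's up to you whether..."]
PACING = ["one question per screen", "short grouped sections", "a single summary page"]

def build_prompt_variants(topic: str) -> list[str]:
    """Combine tone axes into distinct drafting prompts to evaluate side by side."""
    variants = []
    for agency, pacing in product(AGENCY, PACING):
        variants.append(
            f"Draft content about {topic}. "
            f"Frame actions with user agency (e.g. '{agency}') "
            f"and structure the flow as {pacing}."
        )
    return variants

variants = build_prompt_variants("reporting a spiking incident")
print(len(variants))  # 3 agency phrasings x 3 pacing options = 9 variants
```

Each variant then goes through the same human evaluation step; the point is that the combinatorial drafting work, not the judgement, is what gets automated.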

3. Consistency across massive services

Maintaining consistent trauma-aware language across the 5,000 Cancer Research UK pages I rewrote would have been nearly impossible without systematic approaches. AI ensures:

  • Consistent application of trauma-informed principles
  • Elimination of accidentally triggering language
  • Uniform navigation patterns that create predictability (crucial for trauma)

Yes, humans can do it. But it takes far longer. I know it does. I did it.
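Consistency governance at this scale can be partly automated. Here is a minimal sketch of an audit script that flags phrases against a deny-list drawn from a trauma-informed style guide; the phrases and reasons below are illustrative assumptions, not a recommended list:

```python
# Illustrative deny-list. A real one would come from your own
# trauma-informed style guide and testing with affected users.
FLAGGED = {
    "you must": "commanding tone removes user control",
    "failed to": "implies blame for the user's situation",
    "suffers from": "defines the person by the condition",
}

def audit_page(text: str) -> list[tuple[str, str]]:
    """Return (phrase, reason) for each flagged phrase found in a page."""
    lowered = text.lower()
    return [(phrase, reason) for phrase, reason in FLAGGED.items() if phrase in lowered]

page = "You must report within 24 hours or you will have failed to comply."
for phrase, reason in audit_page(page):
    print(f"flagged '{phrase}': {reason}")
```

Run across 5,000 pages, a check like this catches the accidental regressions that no human reviewer can reliably spot by hand.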

The real human advantage: direction, not creation

Where humans remain essential isn’t in writing every word. It’s in:

  1. Setting the guardrails: Defining what sensitive, trauma-informed content looks like
  2. Validating with lived experience: Testing with actual affected communities
  3. Making ethical judgements: Deciding how to balance competing sensitive needs

When I created the Universal Credit website, building trust despite massive institutional distrust required constant calibration between clarity, respect and authority. These judgement calls remain human territory, while execution can be AI-augmented.

Practical steps for hybrid trauma-informed content

Here’s how to actually implement this effectively:

  1. Create trauma-informed content principles first, then train AI to recognise and apply them
  2. Use AI to research and process traumatic material, protecting your mental health
  3. Generate multiple versions of sensitive content flows with AI
  4. Test with actual users who have lived experience
  5. Use AI to scale the validated approaches across entire services
  6. Implement consistent trauma-aware language using AI governance
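Step 1 – principles first, then the AI – can be as simple as encoding your principles into a reusable system prompt. The sketch below uses the four criteria from earlier in this article; the instruction wording is an illustrative assumption, not a tested prompt:

```python
# The four trauma-informed criteria from this article, phrased as
# drafting instructions. The exact wording here is a sketch.
PRINCIPLES = [
    "Recognise trauma's impact: never minimise or question the user's experience.",
    "Put users in control: offer choices, not commands ('you can', never 'you must').",
    "Avoid retraumatisation: include no graphic detail the task does not require.",
    "Create psychological safety: say what happens next and what stays private.",
]

def system_prompt(service: str) -> str:
    """Compose a reusable system prompt applying the principles to one service."""
    rules = "\n".join(f"{i}. {p}" for i, p in enumerate(PRINCIPLES, start=1))
    return (
        f"You are drafting content for {service}.\n"
        f"Apply these trauma-informed principles to every sentence:\n{rules}"
    )

print(system_prompt("a spiking information portal"))
```

Because the principles live in one place, updating them after user testing (step 4) automatically propagates to every draft the AI produces (step 5).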

The burnout prevention tool

Perhaps the most powerful and underdiscussed benefit: AI helps content strategists avoid secondary trauma. When I wrote for Cancer Research UK, I absorbed countless cancer stories. When working on domestic abuse and drink spiking content, I internalised horrific narratives.

AI can process this material without developing PTSD. It can summarise research without needing therapy. This isn’t a small benefit; it’s the difference between sustainable careers in sensitive domains versus burnout.

Finding the windows of opportunity

Just as I discovered at SME Web that identifying small windows of opportunity was crucial, the same applies to AI in trauma-informed content. The window isn’t replacing human judgement. It’s augmenting our ability to apply that judgement consistently across massive services, to test more variations and to protect our mental health while doing deeply emotional work.

The way forward

The future belongs to content strategists and content designers who:

  1. Understand trauma-informed principles deeply
  2. Know how to translate those principles into AI prompts and guidance
  3. Can validate outputs with lived experience communities
  4. Use AI to scale what works across massive services

AI isn’t replacing empathy in content. It’s finally giving us the tools to apply empathetic principles at scale without destroying ourselves in the process.

And honestly, those claiming AI can’t do this work are really saying they don’t know how to use AI effectively yet. The divide isn’t between humans and machines. It’s between practitioners who understand this new paradigm and those clinging to outdated methods.

The question isn’t whether AI can be empathetic. It’s whether we’ll use AI to make empathy scalable, sustainable and more impactful than ever before.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.