
Workplace Insights by Adrie van der Luijt
In 2022, I spent months researching domestic violence and drink spiking for the Metropolitan Police’s national services. Reading case file after case file. Witness statements. Evidence logs. The weight of other people’s trauma, day after day, until I couldn’t carry any more.
Burnout and secondary trauma aren’t abstract concepts when you’ve lived them. They’re the moment you realise you can’t do this work the way you’ve been doing it anymore.
That experience changed how I think about content work, about exposure to harm, about the invisible labour that keeps systems running. It’s why I now specialise in using AI responsibly to protect researchers from the kind of trauma I experienced, using technology to shield people from repeated exposure to harmful content while still getting the work done.
But what keeps me up at night is that the people building the AI systems that could protect researchers are themselves experiencing the exact trauma we’re trying to prevent. And they’re warning their families to stay away from what they’re building.
Content moderators working eight-hour shifts reviewing child sexual abuse material for £2 an hour are experiencing PTSD, panic attacks and relationship breakdowns, according to an article in The Guardian last week.
Data annotators tasked with teaching AI what counts as racist language are making split-second judgement calls that get amplified to millions of users, knowing they’re likely getting it wrong.
Google raters evaluating medical advice are warning their 10-year-olds never to use chatbots because they’ve seen how bad the data going in really is.
An AI worker on Amazon Mechanical Turk, a marketplace where companies hire workers to perform specific tasks, spent two years helping to train models, only to ban all generative AI from her home and actively discourage friends from using it.
These aren’t isolated anecdotes. They’re data points in a pattern financial services firms need to recognise: the people closest to how AI systems actually work are warning their loved ones to stay away from the technology. When the insiders don’t trust what they’re building, that’s not a technology problem, but a trauma problem, an ethics problem and, increasingly, a regulatory compliance problem.
A recent study of 113 data labellers and content moderators across Kenya, Ghana, Colombia and the Philippines documented over 60 cases of serious mental health harm, including PTSD, depression, insomnia, anxiety and suicidal ideation. Workers described panic attacks, chronic migraines and symptoms of sexual trauma directly linked to the graphic content they review daily.
One Kenyan moderator said she could no longer go on dates, haunted by the sexual violence she was forced to view. Richard Mathenge, who trained OpenAI’s GPT model for nine hours a day, five days a week, found himself scarred by the work. Today, he and his team still carry the explicit content they repeatedly viewed and labelled.
This isn’t burnout. This isn’t ordinary workplace stress. This is moral injury: psychological, social and spiritual suffering resulting from witnessing, perpetrating or failing to prevent an act that violates one’s moral beliefs.
The mechanism is straightforward: workers are told their feedback matters while being given vague or incomplete instructions, minimal training and unrealistic time limits. They report problems. Nothing changes. They watch their errors get scaled to millions of users. They carry individual responsibility for systemic failures they can’t control.
Moral injury is inherently relational. Feelings of guilt, shame and anger are directly tied to one’s relationships with others. When you’re making AI “safe” for public use while your own family relationships are fracturing from the trauma of the work, that’s institutional betrayal operating at scale.
The Guardian reported on a dozen AI raters and workers who’ve personally stopped using generative AI and actively discourage family members from using it. What they’re seeing isn’t reassuring.
The numbers tell part of the story. A NewsGuard audit found that between August 2024 and August 2025, chatbot non-response rates dropped from 31% to 0% – meaning they’re now optimised to always give an answer. Meanwhile, their likelihood of repeating false information nearly doubled from 18% to 35%.
That’s not an accident. It’s a design choice: never say “I don’t know” even when you don’t know. From a trauma-informed perspective, this is catastrophic for vulnerable users who need systems that admit uncertainty, not systems that deliver confident bullshit.
It reminds me a lot of when I was responsible for all online content for Universal Credit, the UK government’s flagship welfare reform programme. It was designed to be “digital-by-default”, removing any need for external support.
According to workers interviewed for a Time investigation, 81% of content moderators believe their employer does not do enough to support their mental health. They have 150 minutes per week for “wellness breaks” to cool off after seeing traumatising content, but always have a ticking clock telling them to get back to work.
The incentive structure is clear: ship fast, scale faster, ignore the human costs on both sides of the technology.
Here’s what should worry financial services firms: one Google AI rater who evaluates responses generated by Google Search’s AI Overviews said she tries to use AI as sparingly as possible, if at all, and has forbidden her 10-year-old daughter from using chatbots: “She has to learn critical thinking skills first or she won’t be able to tell if the output is any good.”
Another rater was more direct: “This is not an ethical robot. It’s just a robot.”
These workers see what goes in. They know how the sausage is made. As one put it: “After having seen how bad the data is that goes into supposedly training the model, I knew there was absolutely no way it could ever be trained correctly like that.”
Garbage in, garbage out – except now the garbage is deployed in customer service systems, financial advice tools and vulnerable customer interactions across the sector.
The US Consumer Financial Protection Bureau’s research on chatbots in consumer finance found that poorly deployed systems create customer frustration, reduced trust and violations of consumer protection laws. When chatbots provide inaccurate information about consumer financial products or services, customers can be charged inappropriate fees, which in turn can lead to worse outcomes, such as default.
The stakes are high when someone’s financial stability is at risk. Recognising and handling disputes is essential: failing to resolve one can be disastrous for the person involved.
Recent UK research by the Lending Standards Board found the gap between chatbot and human performance alarming. While 74% of live chat users said they felt their circumstances were understood, just 35% of chatbot users said the same. Meanwhile, 50% of live chat customers felt indicators of vulnerability were addressed, compared to just 31% among chatbot users.
That’s not a technology glitch. It’s a fundamental mismatch between what vulnerable customers need and what the systems can deliver.
A recent Ernst & Young report found that 99% of 975 businesses surveyed have suffered financial losses from AI-related risks, with nearly two-thirds suffering losses of more than $1 million. Insurance companies are now offering specialised coverage for AI failures, including data leaks, discriminatory decisions and, as multiple lawsuits have shown, encouraging self-harm in vulnerable users.
The FCA flagged this in July 2022: firms using algorithms, including machine learning or artificial intelligence, that embed or amplify bias could create worse outcomes for some groups of customers and might not be acting in good faith towards their consumers.
If you’re deploying AI in customer-facing systems, and the people who train those systems are:
- experiencing PTSD, panic attacks and moral injury from the work
- warning their own families to stay away from the technology
- reporting problems and watching nothing change
- working to vague instructions, minimal training and unrealistic time limits
Then how, precisely, are you meeting your duty to act in good faith, avoid foreseeable harm and enable customers to pursue their financial objectives?
The workers’ experience maps directly to user vulnerability. When 89% of excellent mystery shopping experiences were linked to webchats that allowed free-flowing dialogue, and 74% of poor experiences were associated with fully templated interactions, you’re seeing the same rigid systems that frustrate vulnerable workers create worse outcomes for vulnerable customers.
Content moderators and AI workers have formed the Global Trade Union Alliance of Content Moderators to demand eight protocols, including limits on exposure to traumatic content, the elimination of unrealistic quotas, 24/7 mental health support for at least two years after leaving the job, living wages, workplace democracy and the right to unionise.
The fact that they need to organise internationally just to get basic occupational safety standards tells you everything about how the AI industry treats the invisible workforce keeping systems running.
In countries where mental health care infrastructure is severely under-resourced, the burden is pushed onto overworked public systems and households. The psychological toll isn’t incidental. It’s the predictable outcome of an industry structured around outsourcing, speed, surveillance and extracting invisible labour under extreme conditions.
Moral injury research shows that addressing individual psychology alone isn’t enough. We need ethics- and context-informed approaches that recognise mental health extends beyond intra-individual factors to encompass relational impacts, including conflict, betrayal and social alienation.
When workers report problems and nothing changes, that’s not poor communication, but institutional betrayal. When they’re essential to the system but treated as disposable, that’s structural violence. When they’re responsible for outcomes but powerless to fix the underlying problems, that’s the recipe for chronic moral injury.
Start asking harder questions about who builds these systems and under what conditions.
Brook Hansen, who’s trained some of Silicon Valley’s most popular AI models since 2010, puts it simply: “We’re expected to help make the model better, yet we’re often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks. If workers aren’t equipped with the information, resources and time we need, how can the outcomes possibly be safe, accurate or ethical?”
The gap between what’s expected and what’s actually possible isn’t a training problem. It’s the business model.
As one worker put it, once you’ve seen how these systems are cobbled together – the biases, the rushed timelines, the constant compromises – “you stop seeing AI as futuristic and start seeing it as fragile”.
Fragile systems built on precarious labour create fragile outcomes for vulnerable users. It’s the same pattern I saw on digital transformation projects launched as a minimum viable product (MVP) with insufficient support and buy-in, on the strength of vague promises to improve things at some point in the future. When the people making these systems won’t let their own children use them, that’s not technophobia but informed risk assessment from people with privileged information.
Financial services firms deploying AI to vulnerable customers are making a bet: that the insiders are wrong or overreacting or being unnecessarily cautious with their families.
That’s not a bet you want on your regulatory record when the FCA comes asking how you assured yourself your AI deployment met Consumer Duty obligations.
Mophat Okinyi, who worked on ChatGPT’s safety, said repeated exposure to explicit text led to insomnia, anxiety, depression and panic attacks. His wife left him. “However much I feel good seeing ChatGPT become famous and being used by many people globally, it has destroyed my family. It destroyed my mental health. As we speak, I’m still struggling with trauma”.
That’s the human cost of the “safe AI” you’re deploying. Someone carried that trauma so your customers could get confident answers, accurate or not, from a chatbot.
Most workers are legally barred from speaking out by non-disclosure agreements: 75 of the 105 workers approached in Colombia declined interviews, as did 68 of 110 in Kenya, overwhelmingly for fear of violating their NDAs.
The people who know the most are silenced. The people deploying the systems trust the marketing materials. The vulnerable customers bear the consequences.
And when things go wrong – when the chatbot gives terrible financial advice, when it fails to recognise a customer in crisis, when it compounds existing vulnerabilities – who’s accountable?
Not the workers who tried to flag problems and got ignored. Not the vendors who optimised for engagement over accuracy. You’re accountable. Your firm is on the hook for Consumer Duty compliance.
The invisible workforce building AI can’t protect your customers because they can’t even protect themselves or their families. That should tell you everything you need to know about whether you’re ready to deploy these systems at scale.
The insiders are warning their families. Are you listening?

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes: