
Workplace Insights by Adrie van der Luijt
There’s a particular kind of exhaustion that comes from building something you know will make people’s lives worse. I’ve felt it. You’ve probably felt it too. It’s that soul-deep tiredness that comes not from hard work but from work that contradicts everything you believe about good practice. I’ve worked on projects like that, and the frustration stays with you.
This week, the Perkonomics Report put numbers to what we’ve all been feeling: forty-two percent of workers feel undervalued, and thirty-eight percent say using AI at work reduces their sense of personal accomplishment.
Sit with that for a moment. More than a third of us feel like our work matters less because of the very tools we’re implementing.
I’ve spent this week writing about AI and vulnerability, about billions spent on the wrong priorities. Now I need to talk about the people caught in the middle. The content designers being asked to prompt-engineer chatbots that will replace human advisers. The UX writers crafting error messages for systems designed to reject rather than support. The strategists building user journeys that they know will leave vulnerable people behind.
We’re not just building bad systems. We’re burning out the very people who could build better ones.
The disconnect between leadership and reality is staggering. Sixty-nine percent of employers believe AI is improving the employee experience. Meanwhile, sixty percent of employees say feeling undervalued has harmed their mental health. That’s not a perception gap. That’s wilful blindness. It’s leadership so removed from the actual work that they can’t see their workforce drowning.
I think about my own journey through digital transformation, from those early days in 1987 when we were all genuinely excited about possibilities, through to now, when I watch talented practitioners leave the field entirely because they can’t stomach another day of building hostile systems. The technology has advanced exponentially. Our treatment of the people implementing it has gone backwards.
The report shows that thirty-four percent of employees are likely to look for a new role within the next year. Among those who feel undervalued, it jumps to fifty-four percent. We’re haemorrhaging the exact people who understand both the technology and the human impact. The ones who could build trauma-informed systems if anyone would let them. The ones who know the difference between efficiency and cruelty.
But here’s what really gets me: we’re asking content practitioners to implement AI whilst simultaneously telling them their expertise doesn’t matter anymore. Write prompts for the chatbot that will replace the call centre workers. Design forms for the AI that will auto-reject benefits claims. Create user journeys for systems that treat human complexity as edge cases to be eliminated.
And then we wonder why these systems lack empathy.
You can’t build compassionate technology with a demoralised workforce. You can’t create trauma-informed design when your designers are traumatised by their own work environment. You can’t serve vulnerable users when the people building the systems feel fundamentally undervalued.
I remember a conversation with a content designer last month. She’d been asked to write messaging for an AI system that would handle benefits appeals. The system was designed to reject a set percentage of appeals automatically. She knew, we all knew, that many of those rejections would be wrong. That behind each rejection was a person in crisis who’d now have to navigate another layer of bureaucracy. She wrote the messages. What choice did she have? But she told me she cries in her car after work now.
This is what we’re doing to our workforce. We’re asking them to be complicit in building systems they know cause harm, then wondering why engagement is plummeting. We’re requiring them to implement AI that reduces their own sense of worth, then expecting them to somehow infuse these systems with human understanding.
The report mentions that recognition is the number one driver of feeling valued, according to fifty-three percent of employees. But how do you recognise someone for building a system that makes vulnerable people’s lives harder? How do you celebrate the successful implementation of a chatbot that you know will leave people feeling unheard? The whole recognition framework assumes the work has positive value. When it doesn’t, recognition becomes another form of gaslighting.
The generational divide is particularly painful to watch. Younger practitioners entering the field with genuine enthusiasm about using technology to help people, only to discover they’re actually building walls, not bridges. Experienced practitioners who know better but feel powerless to change course. Middle managers caught between impossible targets and their team’s wellbeing. Everyone knows the emperor has no clothes, but we’re all too exhausted or scared to say it.
What really breaks my heart is seeing talented content strategists reduced to prompt engineers. These are people who understand information architecture, user psychology, accessibility, plain English and trauma-informed design. They could be building systems that actually serve users. Instead, they’re writing scripts for chatbots that frustrate everyone who encounters them. They’re creating content for AI tools that make decisions they don’t understand based on logic they can’t explain.
The mental health impact is real. Sixty percent of employees who feel undervalued say it’s harmed their mental health. That’s not just statistics. That’s people developing anxiety about work they used to love. That’s experienced practitioners questioning their entire career. That’s talented humans being ground down by systems that see them as resources to be optimised rather than experts to be valued.
I’ve been in this field long enough to remember when digital transformation meant possibility. When we genuinely believed we could make services more accessible, more human, more responsive to need. Some of us still believe it. But we’re fighting against systems that prioritise everything except the humans involved, whether they’re building the technology or using it.
I have worked on everything from Universal Credit to victim-centred services for the Met Police, from crisis grants for the Cabinet Office to the Cancer Research UK website. Everyone I worked with shared my passion for creating services that make life better for everyone: the people using them as well as the people delivering them. It has taken me years to reach the point where my work can make a real difference to people at their most vulnerable. AI has the opportunity to scale empathy, as I recently wrote on Kristina Halvorson’s Button blog. But it will only do that if we also invest in the people behind the technology.
The truth is, the people who could build brilliant, compassionate, trauma-informed AI systems are already here. They’re in your organisation right now. They’re the ones questioning whether that error message might trigger someone. They’re the ones suggesting user research with vulnerable groups. They’re the ones pointing out that efficiency without empathy is just cruelty at scale. But they’re also the ones being told their concerns slow down delivery. Their expertise is seen as resistance to change rather than essential insight.
We keep talking about AI transformation like it’s inevitable, like we have no choice in how it happens. But every system is designed by humans, implemented by humans, maintained by humans. When those humans feel undervalued, unheard, and increasingly unwell, what exactly do we think they’re building? You can’t extract compassionate technology from an uncompassionate workplace.
The path forward isn’t complicated, but it requires something most organisations won’t give: genuine respect for the people doing the work. Listen to your content practitioners when they tell you a system will cause harm. Value their expertise in human communication, not just their ability to operate new tools. Recognise that building ethical, trauma-informed systems takes time, research and deep understanding of user needs.
Most importantly, stop asking people to build systems that contradict their professional values and personal ethics, then wondering why they’re disengaged. The cognitive dissonance of building harmful systems whilst being told you’re driving innovation is destroying your workforce from the inside out.
The Perkonomics Report shows employers think technology is the answer. But technology without humanity is just expensive cruelty. We need to value the humans in the loop, whether they’re building the systems or using them. Until we do, we’ll keep building AI that fails everyone it touches, starting with the people forced to create it.
The real tragedy isn’t that we’re building bad systems. It’s that we have all the expertise we need to build good ones, sitting right there in our demoralised workforce, if only we had the wisdom to listen.

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes: