
Workplace Insights by Adrie van der Luijt
When Press Gazette recently exposed dozens of national news articles quoting “experts” who don’t exist, I felt physically ill. These weren’t minor mistakes: they were fabricated sources with AI-generated credentials and entirely manufactured quotes that had sailed past editorial checks at major publications.
Having spent years working with content teams across sectors from government to healthcare, I saw this as the early warning we all needed. Because what’s happening in newsrooms right now previews exactly what’s coming for every content strategy team that embraces AI without proper guardrails.
The hard truth about AI content strategy is that these tools don’t just make content creation more efficient; they fundamentally threaten content integrity in ways most organisations haven’t begun to address. And if seasoned journalists with decades of training in verification are getting caught out, what hope do ordinary content teams have?
The journalism industry is our canary in the coal mine. Press Gazette’s investigation exposed how journalist-request platforms have become breeding grounds for fake experts with AI-generated credentials, whose quotes end up in trusted publications.
One national publisher was so alarmed they contacted all their freelancers with warnings. The BBC removed an article entirely from their archive after discovering it quoted someone who simply didn’t exist.
This isn’t just a journalism problem; it’s an AI content strategy dilemma that’s already affecting every sector I work with. At the Metropolitan Police, we caught multiple instances where AI drafts invented policies that sounded plausible but simply didn’t exist. At TOTSCo, an AI-generated technical document included fabricated regulatory requirements that could have created serious compliance issues.
The BBC’s recent study of AI content assistants confirmed what many of us working in content strategy have seen firsthand. They gave four major AI assistants – ChatGPT, Copilot, Gemini and Perplexity – access to their articles and found that over half of the AI-generated answers contained significant issues.
Nearly one in five responses introduced factual errors, including wrong dates, incorrect numbers and events that never happened. Most troublingly, 13% of quoted material was either fabricated entirely or altered from the original source.
For content strategists implementing AI workflows, these findings aren’t theoretical. They’re practical challenges we face daily. Every time we use AI to generate content, we risk introducing errors that undermine the very trust we’re trying to build.
“It’s a reminder why the old-school method of picking up a phone and talking to someone, or better still talking to them face to face, remains the most failsafe method of guaranteeing the validity of an expert,” noted former Daily Mirror special correspondent Tom Parry in response to the Press Gazette investigation.
This sentiment echoes what I’ve learned implementing AI content workflows across different organisations. The most robust AI content strategy isn’t about replacing human expertise, but about using AI to amplify it.
At the Cabinet Office, we developed an AI content strategy that followed what Paul Doyle at Press Gazette calls the “AI sandwich” approach:
“AI serves as an initial processing tool from an instructed prompt. It summarises, organises or assists in research before a human editor refines, verifies and contextualises the content. Finally, AI can then be used once more in the post-processing phase for accessibility improvements, such as translations or formatting before again returning for human review.”
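To make this concrete, here is a minimal sketch of how the “AI sandwich” could be expressed as a content pipeline. Everything here is illustrative: the function names are hypothetical placeholders rather than any real tool or API, and human_review() stands in for a person doing the checking, not software.

```python
# A minimal sketch of the "AI sandwich" as a pipeline.
# All names are hypothetical placeholders, not a real API:
# draft_with_ai() and post_process_with_ai() stand in for whatever
# model calls a team uses; human_review() represents a person.

from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    verified: bool = False


def draft_with_ai(brief: str) -> Draft:
    """Stage 1: AI summarises, organises or assists in research."""
    return Draft(text=f"[AI first pass based on: {brief}]")


def human_review(draft: Draft) -> Draft:
    """Stage 2: a human editor refines, verifies and contextualises.
    In practice this means checking every claim against sources."""
    draft.verified = True  # only set once each assertion is checked
    return draft


def post_process_with_ai(draft: Draft) -> Draft:
    """Stage 3: AI handles accessibility tasks such as formatting,
    before the result returns to a human for final sign-off."""
    draft.text = draft.text.strip()
    return draft


def ai_sandwich(brief: str) -> Draft:
    draft = draft_with_ai(brief)           # AI pre-processing
    draft = human_review(draft)            # human verification in the middle
    draft = post_process_with_ai(draft)    # AI post-processing
    return human_review(draft)             # final human review
```

The point of the structure is that the AI never touches the content twice in a row: a human checkpoint sits between every automated stage.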
This structured approach to AI content workflows proved essential for maintaining integrity. The verification stage is where most AI-related disasters are either caught or slip through. At Cancer Research UK, we insisted on a strict verification protocol for any AI-assisted content, requiring separate fact-checking of every assertion and cross-checking against primary sources.
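For teams formalising a protocol like this, here is a sketch of what assertion-level tracking might look like. The structure and status values are illustrative assumptions on my part, not the actual Cancer Research UK process.

```python
# A sketch of assertion-level verification tracking.
# The fields and statuses below are illustrative assumptions,
# not a real organisation's protocol.

from dataclasses import dataclass, field


@dataclass
class Assertion:
    claim: str                          # a single factual statement in the draft
    primary_sources: list[str] = field(default_factory=list)
    status: str = "unchecked"           # "unchecked" | "confirmed" | "rejected"


def ready_to_publish(assertions: list[Assertion]) -> bool:
    """Every assertion must be confirmed against at least one
    primary source before the content can go out."""
    return all(
        a.status == "confirmed" and a.primary_sources
        for a in assertions
    )


claims = [
    Assertion("Treatment X is in phase II trials", ["trial registry entry"]),
]
claims[0].status = "confirmed"
print(ready_to_publish(claims))  # True only when every claim checks out
```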
As a senior content strategist, I typically work very closely with a subject matter expert. Even so, I put in a lot of time and effort to conduct my own research and verify information. In my experience, even subject matter experts can make mistakes or have blind spots.
Yes, this slowed the process compared to the magical overnight content generation that some had hoped for, but it prevented potentially catastrophic misinformation about cancer treatments and research. Imagine if I had relied on AI when I wrote the national drink and needle spiking advice and information service for Police.UK – and it had fabricated crucial safeguarding details or legal frameworks.
The Economist’s president Luke Bradley-Jones recently revealed their strategy for the AI era, focusing on building what he calls a “moat” to defend against AI content aggregation.
“The LLM and AI ecology is evolving very, very quickly, and it’s going to be extremely hard for any brand to get a foothold within that ecology,” Bradley-Jones explained. “Actually the most important thing is to work out how you must survive outside of that ecology.”
For The Economist, this means doubling down on their areas of genuine differentiation – geopolitics, defence, technology and economics – where they have deep expertise that can’t be easily replicated by AI.
This approach offers a template for content strategists implementing AI content workflows. The era of generic, commodity content is over. AI can generate basic explainers and simple listicles at scale and at virtually no cost. The only sustainable path forward is to focus relentlessly on content that AI cannot easily replicate.
When implementing AI content strategy at TOTSCo, we shifted resources from content generation to expert interviews and first-hand research, precisely because this created material that AI could not easily replicate. The result was content that was not only more trustworthy but also more distinctive and valuable, strengthening their position as industry experts.
Based on what we’re seeing in journalism and my own experience implementing AI in content teams, a handful of principles will determine which content strategists thrive and which get replaced.
As former Evening Standard CEO Paul Kanareck notes, AI could actually redirect investment from bloated IT departments back into content creation:
“Digital transformation may have simply replaced the legacy costs of print distribution with those of digital infrastructure, leading to a similar outcome: fewer resources being allocated to actual journalists and journalism.”
The same pattern is emerging in content strategy. As AI simplifies technical processes, organisations are beginning to rebalance their structures, focusing on excelling in content rather than engineering. One client recently reallocated budget from a planned CMS upgrade to fund a team of specialist content creators, recognising that their competitive advantage lies in content quality, not technical infrastructure.
For content strategists, this means our role is evolving from production to orchestration. The highest value will be in prompt engineering (crafting the perfect instructions) and verification/editing (ensuring quality and accuracy), not in the basic generation that AI handles most efficiently.
When I trained my content team to use AI content tools, the most valuable skill wasn’t technical proficiency but critical thinking: the ability to evaluate outputs against user needs, brand voice and ethical standards. The best content people became brilliant editors rather than mere generators. Perhaps that’s also why so many brilliant content strategists come from a journalism background.
What’s becoming increasingly clear is that we’re heading for an integrity crisis in content. As AI generates more material with less oversight, the gap between what’s trusted and what’s not will widen dramatically.
For content strategists, this integrity gap represents the single biggest opportunity of the AI era. Organisations that can consistently deliver accurate, verified, genuinely valuable content will stand out in a sea of AI-generated mediocrity.
When I worked on complex government digital services like Universal Credit, we operated under the principle that errors could literally cost people their livelihoods. That level of responsibility demanded verification processes that might seem excessive in commercial content.
In the age of AI, those same rigorous approaches are becoming essential for anyone who wants their content to be trusted.
The crisis in journalism offers content strategists a preview of what’s coming for all content: a world where verification becomes the central challenge and core skill. Those who master it will thrive. Those who don’t may find themselves creating content that looks impressive but ultimately undermines the very trust they’re trying to build.
Because ultimately, content strategy isn’t just about creating effective content. It’s about creating content that deserves to be trusted.
In an age when AI can fabricate information with frightening plausibility, earning that trust requires more rigour, more humanity, and more integrity than ever before.
Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.