Workplace Insights by Adrie van der Luijt

AI fake experts in journalism

How a business editor's instinct remains our best defence against AI-generated nonsense

As AI fake experts increasingly infiltrate mainstream journalism, anyone who has worked under extreme deadline pressure understands precisely why verification is failing – and why human judgement remains our best defence against AI-generated nonsense.

Last week, Press Gazette published a troubling investigation revealing how major news outlets have been duped into quoting AI-generated “experts” who don’t actually exist.

As someone who’s operated on both sides of the media equation – writing PR commentary that appeared on the Financial Times website within twenty minutes, and producing breaking financial news stories with just ten minutes to deadline – I’m not surprised in the slightest by this revelation about AI fake experts.

The modern digital newsroom is a pressure cooker designed for speed, not careful verification. When I edited business content for Director of Finance Online and SME Web, I often had mere minutes to turn around breaking stories on interest rate decisions or housing statistics for inclusion on Google News.

Even with draft articles prepared for every possible scenario, these pieces still required expert commentary to add credibility and context – creating perfect conditions for artificial expertise to flourish.

In those circumstances, the idea of thoroughly vetting each expert source becomes almost laughable. You’re frantically refreshing your inbox for quotes while simultaneously watching market movements, drafting headlines and praying your CMS doesn’t crash. It’s an environment tailor-made for AI fakery to slip through.

The journalist’s BS detector

But here’s the thing: experienced journalists develop something AI can’t replicate – instinct. That gut feeling when something doesn’t quite ring true.

During my time as editor of SmartLandlord for Towergate Insurance, I regularly had to interpret how markets would react to breaking news. This wasn’t just about reporting facts; it required contextual intelligence built through years of watching how these stories typically unfold. You develop a nose for finding the truth behind the PR gloss.

Take the classic “senior executive departure” press release. Behind the boilerplate language about “pursuing other opportunities” or “spending more time with family” often lurks a much deeper story: poor performance, boardroom conflicts or strategic disagreements.

A good business journalist doesn’t just rewrite the press release; they read between the lines.

This developed instinct is precisely what helps spot AI-generated content. The same pattern recognition that tells you a CEO’s sudden departure “to pursue personal interests” might signal trouble ahead also flags when an “expert” comment feels oddly generic, technically correct but lacking lived experience.

Real experts have distinctive voices, specific blind spots and occasional uncertainty. They provide unexpected insights based on their unique experiences.

AI-generated commentary, by contrast, often produces technically sound but strangely frictionless analysis – too comprehensive, too balanced, too perfect.

Calculated risks and community accountability

Part of what makes journalism valuable is the willingness to take calculated risks, to suggest connections or implications without spelling them out explicitly. This requires understanding the boundaries of acceptable speculation and libel law – something no AI system has a reliable framework for assessing.

When I wrote market analyses, I followed investor discussion boards to see how my takes were received. Was my interpretation seen as credible by market participants? This accountability loop helped refine my judgement over time. AI lacks this feedback mechanism for developing nuanced understanding.

The truth is that journalism has always been imperfect. But its imperfections – the selective focus, the calculated gambles, the distinctive voice – are precisely what make it valuable. AI-generated content strips away these human elements in favour of bland comprehensiveness.

The dangers of mistaken identity

The risks extend far beyond bland commentary. Consider this personal example: my husband shares exactly the same name – first and last – with a well-known Financial Times journalist. In a world of automated news generation, what happens if one is implicated in a financial scandal?

Would an AI system carefully distinguish between these namesakes, or would it make a catastrophic error based on which one has a higher online profile? The consequences of such a mistake could be professionally and personally ruinous.

Human journalists understand the critical importance of verification in such cases, confirming specific institutions, titles, timelines and contextual factors that make one person more likely to be involved than another. AI systems making probabilistic associations based on scraped information could easily conflate identities.
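
To make that failure mode concrete, here is a minimal sketch in Python – with entirely invented names, employers and scores, not a model of any real system – contrasting profile-based matching with the kind of verification a human journalist performs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:
    name: str
    employer: str
    profile_score: float  # stand-in for volume of scraped online mentions

# Two hypothetical namesakes; all values are invented for illustration.
candidates = [
    Person("J. Smith", "Financial Times", profile_score=0.97),
    Person("J. Smith", "Regional accountancy firm", profile_score=0.03),
]

def naive_match(mention_name: str) -> Person:
    """Probabilistic association: link the mention to whichever
    namesake has the bigger online footprint."""
    return max(
        (p for p in candidates if p.name == mention_name),
        key=lambda p: p.profile_score,
    )

def verified_match(mention_name: str, employer: str) -> Optional[Person]:
    """Human-style verification: confirm the institution before
    attributing anything to anyone."""
    return next(
        (p for p in candidates if p.name == mention_name
         and p.employer == employer),
        None,
    )

# A scandal at the accountancy firm: the naive matcher picks the
# higher-profile namesake and names the wrong person.
print(naive_match("J. Smith").employer)  # Financial Times
print(verified_match("J. Smith", "Regional accountancy firm").employer)
```

The naive matcher fails precisely when the stakes are highest: when the lower-profile namesake is the one actually in the story.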

The innovation blind spot

Here’s another critical limitation: AI excels at identifying patterns, but that very strength creates a profound blind spot when it is confronted with genuine innovation or fresh thinking.

When faced with a truly original expert perspective, AI systems would likely handle it in one of three problematic ways:

  1. Dismiss it as an anomaly: Since the novel insight doesn’t fit established patterns, the AI might flag it as statistically improbable or irrelevant, essentially treating breakthrough thinking as noise rather than signal.
  2. Dilute it to fit existing patterns: The AI might recognise elements of the insight but reframe them in more conventional terms, stripping away precisely what makes the perspective valuable.
  3. Misclassify it entirely: Without appropriate context, AI might incorrectly categorise genuinely innovative thinking as belonging to an unrelated domain or perspective.

This limitation is particularly dangerous in financial journalism, where the most valuable insights often come from experts who see patterns others don’t or who recognise when established patterns are about to break.

Consider how valuable it was when someone first identified subprime mortgages as a systemic risk before the 2008 crisis, a perspective that would have appeared statistically anomalous to any pattern-recognition system at the time.

The irony is that in trying to separate signal from noise, AI systems may systematically filter out the most important signals, those that don’t fit comfortably with existing knowledge.

This creates a genuine risk of intellectual stagnation if we over-rely on AI for expert validation, privileging consensus thinking and screening out the very experts whose divergent insights we most need to hear.
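
As a deliberately naive illustration – a sketch with invented numbers, not a description of any real system – consider how a simple consensus filter handles the one divergent assessment that later proves correct:

```python
import statistics

# Hypothetical risk assessments of a mortgage product, scored 0-1.
# All numbers are invented: five conventional takes and one outlier
# flagging systemic risk long before anyone else.
consensus = [0.91, 0.88, 0.93, 0.90, 0.89]
divergent_insight = 0.12

mean = statistics.mean(consensus)
stdev = statistics.stdev(consensus)

def passes_filter(score: float, max_z: float = 3.0) -> bool:
    """Keep only commentary within max_z standard deviations of the
    consensus – the classic way to 'separate signal from noise'."""
    return abs(score - mean) / stdev <= max_z

print(passes_filter(0.90))               # True: the conventional view survives
print(passes_filter(divergent_insight))  # False: the early warning is discarded as noise
```

The filter behaves exactly as designed, and that is the problem: the subprime-style warning is the first thing it throws away.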

Preserving human judgement in an automated world

As newsrooms increasingly adopt AI tools, we must be clear-eyed about what’s at stake. There’s a place for automation in journalism: data processing, transcription, even first drafts of straightforward stories. But we must preserve the human expertise that makes journalism valuable.

This means:

  • Recognising that verification takes time, and adjusting expectations around publishing speed
  • Valuing and developing journalistic instinct, especially in early-career reporters
  • Creating clear attribution standards for AI-assisted content
  • Establishing robust verification processes for expert sources
  • Maintaining human oversight for sensitive stories involving reputation or legal risk

The Press Gazette investigation isn’t just about fake experts; it’s about what kind of journalism we want. As AI systems become more sophisticated, the irony is that genuine human expertise becomes more valuable, not less.

The developed instinct of seasoned journalists may be our best defence against a rising tide of artificial expertise.

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.