Workplace Insights by Adrie van der Luijt

the AI con

How politicians sell Silicon Valley hype whilst public expertise crumbles

Scientists challenge von der Leyen's claim that AI will match human reasoning by 2026—exposing a dangerous pattern of leaders from Starmer to Brussels betting public services on marketing materials rather than evidence

When Ursula von der Leyen stood before the EU budget conference in May and declared that artificial intelligence would “approach human reasoning” by 2026 — a feat previously expected around 2050 — she wasn’t citing peer-reviewed research. She was, it turns out, repeating marketing materials from tech company CEOs.

This week, more than 70 scientists — including two members of the UN’s high-level advisory body on AI — called her out in an open letter. After the scientists requested evidence for her claim, the European Commission disclosed it was based on “the professional knowledge of Commission services and desk review of scientific literature”. The “literature” turned out to be statements from Anthropic’s CEO, OpenAI’s chief executive, Nvidia’s head, and AI researcher Yoshua Bengio.

“These are marketing statements driven by profit-motive and ideology rather than empirical evidence and formal proof,” the scientists wrote. By amplifying Silicon Valley’s sales pitch, they argued, von der Leyen was undermining Europe’s credibility.

But this isn’t just a Brussels problem. From von der Leyen’s €200 billion AI investment programme to Sir Keir Starmer’s promises that artificial intelligence will revolutionise the NHS and transform public services, political leaders across Europe are making sweeping commitments based on industry hype rather than scientific reality. And whilst billions flow to private AI contractors, the actual human expertise that keeps public services functioning is being systematically dismantled.

The growing gap between hype and reality

The scientists who signed the letter to von der Leyen aren’t Luddites. They work with AI systems daily. Belgian AI researcher Luc Steels, UN advisors Abeba Birhane and Virginia Dignum, and dozens of others understand the technology intimately. That’s precisely why they’re sounding the alarm.

Recent research reveals a widening chasm between what politicians claim AI can do and what the technology actually delivers. In June 2025, Apple researchers published findings titled “The Illusion of Thinking,” demonstrating that state-of-the-art reasoning models “still fail to develop generalizable problem-solving capabilities, with accuracy ultimately collapsing to zero beyond certain complexities”.

In other words: once problems get sufficiently complex, these supposedly revolutionary systems simply stop working.

Oxford researchers examining 445 AI benchmark tests found that many don’t actually measure what they claim to test. Models might be memorising patterns rather than reasoning: the difference between a first-grader reciting “two plus five equals seven” and genuine mathematical understanding. Studies from MIT, Apple and others consistently show that large language models rely on sophisticated pattern-matching, not genuine logic.

Perhaps most tellingly, scientists’ trust in AI is collapsing as they gain experience with it. According to research from academic publisher Wiley, the proportion of scientists who believed AI surpassed human abilities in over 50% of use cases cratered from a majority in 2024 to less than a third by 2025. Concerns about AI “hallucinations” (systems confidently presenting fabricated information as fact) jumped from 51% to 64% in a single year.

Even researchers at the companies building these systems are worried. Forty scientists from OpenAI, Google DeepMind, Anthropic, and Meta co-authored a paper warning they may be losing the ability to understand how advanced AI reasoning models actually work. The transparency that currently exists through “chain-of-thought” processes could vanish as models evolve, they cautioned, leaving developers in the dark about their own creations.

Thomas Wolf, co-founder of AI startup Hugging Face, put it bluntly: current AI models are unlikely to make Nobel Prize-level scientific breakthroughs. They’ll serve as co-pilots for researchers, not revolutionary thinkers.

Yet politicians keep citing the statements of CEOs, people with billions in quarterly earnings to defend, as if they were scientific consensus.

When leaders become salespeople

The pattern is unmistakable. Von der Leyen has repeatedly promoted AI’s imminent leap to human-level reasoning whilst pushing through €200 billion in investments for “AI factories” and “gigafactories”.

At the February 2025 AI Action Summit in Paris, she declared the “AI race is far from over” and promised that Europe’s AI infrastructure would provide “massive computing power, exceeding 100,000 advanced AI processors”.

The money, she assured attendees, would come from both public coffers and private investment, “the world’s largest public-private partnership for the development of trustworthy AI”. Translation: public risk, private profit.

Across the Channel, Keir Starmer’s government is singing from the same hymn sheet. Since taking office, Labour has positioned AI as the solution to virtually every public service challenge. The NHS will be transformed by artificial intelligence. Social care will be revolutionised. The civil service will be streamlined. Efficiency will soar.

The practical reality is that major AI contracts are handed to private firms whilst the NHS haemorrhages junior doctors, teachers burn out under impossible workloads, and social workers struggle with caseloads that make meaningful intervention impossible. The institutional expertise built over decades, the kind that can’t be replicated by pattern-matching algorithms, is being allowed to drain away whilst politicians chase Silicon Valley’s latest miracle cure.

This isn’t naive optimism. It’s ideological commitment dressed as innovation.

What’s actually being lost

Whilst billions flow towards AI procurement, actual human capability is being systematically undermined. The NHS is losing experienced clinicians faster than it can train replacements. Teaching has become unsustainable for many, with mid-career professionals leaving in droves. Social services operate in permanent crisis mode. The civil service, gutted by years of austerity, has lost institutional memory that took generations to accumulate.

These aren’t jobs that AI can simply assume. Medicine requires clinical judgement built over years of practice, the ability to spot subtle patterns in patient presentations, to weigh conflicting evidence and to communicate with frightened people facing life-changing diagnoses. Teaching demands the capacity to adapt to each child’s needs, to recognise when a student is struggling emotionally rather than academically and to inspire curiosity that transcends curriculum requirements.

Social work involves navigating complex family dynamics, making difficult decisions with incomplete information, advocating for vulnerable people within Byzantine bureaucratic systems. Civil service expertise means understanding how policies interact, anticipating unintended consequences, maintaining standards that outlast political cycles.

As researchers at Dutch universities noted in an open letter on AI adoption in academia, AI systems “can mimic the appearance of scholarly work, but they are (by construction) unconcerned with truth”. They produce output that sounds convincing but may be “accidentally true” at best, “confidently wrong” at worst.

This matters profoundly in public services. A GP needs to know not just what diagnosis fits the symptoms, but when standard protocols shouldn’t apply. A teacher needs to recognise that a child’s behavioural issues stem from abuse, not defiance. A social worker needs to advocate for a family even when the easy option is removal. None of these is a matter of processing power. They require human judgement, earned through experience and reflection.

Yet the political narrative presents a false choice: embrace AI transformation or accept decline. The actual choice being made is to starve public services of proper investment in human expertise whilst funnelling money to private contractors promising technological salvation.

Following the money

The revolving door between tech companies and policy roles isn’t a coincidence. AI firms employ armies of lobbyists. Former government officials land consultancy positions with the same companies they once regulated. The “desk review of scientific literature” that von der Leyen’s office conducted somehow concluded that CEO statements from companies with everything to gain constituted evidence.

When Starmer’s government awards NHS AI contracts, who benefits? Not the exhausted A&E staff dealing with corridor care. Not the patients waiting months for routine procedures. The beneficiaries are private firms selling systems that may or may not work as advertised, with public money funding the experiment.

The €200 billion that von der Leyen committed will largely flow to industry. Meanwhile, training programmes for doctors, nurses, teachers, and social workers, the humans who actually deliver public services, face continued constraints. Infrastructure crumbles. Pay fails to match the cost of living. Workloads become unmanageable.

This is policy by press release, governance by wishful thinking. It’s also enormously profitable for those selling the snake oil.

What the evidence actually demands

The scientists who signed the letter to von der Leyen aren’t arguing against AI development. They’re insisting that political leaders stop pretending it can do what it manifestly cannot. They’re demanding that public policy be based on evidence rather than marketing materials.

Proper investment in human expertise means competitive salaries that retain experienced professionals. It means training budgets that allow continuous development. It means workloads that permit reflection and learning rather than constant crisis management. It means institutional structures that preserve knowledge across political cycles.

It means recognising that the boring work of building human capacity, paying teachers properly, training sufficient doctors and maintaining civil service expertise is what actually makes public services function. There’s no shortcut, no technological hack, no AI miracle that obviates the need for properly resourced human professionals.

The technology has uses. AI can assist with pattern recognition in medical imaging. It can help process routine paperwork. It can flag anomalies for human review. As a tool supporting experienced professionals, it has genuine value.

As a replacement for human expertise? That’s the con. And political leaders know it, or ought to. When von der Leyen’s office was pressed for evidence and produced CEO statements, the game was revealed. This isn’t about evidence-based policy. It’s about finding technological justification for the same old agenda: privatisation, outsourcing, and the continued erosion of public capability.

The credibility question

The scientists ended their letter with a pointed observation: “The scientific development of any potentially useful AI is not served by amplifying the unscientific marketing claims of US tech firms.”

They’re right. Europe has positioned itself as the serious regulatory counterweight to Silicon Valley’s move-fast-and-break-things approach. The EU AI Act was meant to establish guardrails. The emphasis on “trustworthy AI” was supposed to differentiate European development from American cowboy capitalism.

But when the Commission President parrots marketing materials and calls it scientific literature, that credibility evaporates. When she commits €200 billion based on claims that researchers working with the technology don’t believe, Europe becomes just another mark in Silicon Valley’s long con.

The same applies to Starmer’s government. Labour positioned itself as the party of evidence-based policy, of pragmatic governance, of fixing broken public services. Yet on AI, it’s repeating the same hype cycle: transformative technology, efficiency gains, revolutionary change. This all happens while the actual infrastructure of public service expertise continues to crumble.

The choice ahead

Political leaders face a genuine choice, though it’s not the one they’re presenting to the public.

They can continue down the current path: amplifying industry marketing, awarding contracts to private AI firms, allowing human expertise in public services to drain away whilst waiting for technological salvation that evidence suggests won’t arrive. This path leads to diminished capability, outsourced decision-making, and public services that look functional on paper whilst failing in practice.

Or they can do the unglamorous work: proper investment in training and retaining skilled professionals, infrastructure that works, pay that attracts and keeps expertise, institutional structures that preserve knowledge and capability. This path requires political courage because it means admitting there’s no shortcut, no hack, no AI miracle that obviates the need for properly funded public services.

The scientists calling out von der Leyen aren’t resisting progress. They’re resisting a con. They’re refusing to stay silent whilst political leaders bet public services on technology that can’t deliver what’s being promised, based on evidence that isn’t evidence at all.

The question is whether those leaders will listen to the people who actually understand the technology or keep taking their cues from those with billions of reasons to oversell it.

When the Commission admitted von der Leyen’s claim about AI approaching human reasoning was based on CEO statements rather than scientific research, it revealed the foundation of current AI policy across Europe: marketing materials, wishful thinking, and ideology wearing innovation’s clothes.

The scientists have called time on the con. Whether political leaders will acknowledge reality or double down on hype remains to be seen. But the cost of continuing to choose marketing over evidence won’t be borne by the CEOs making the promises; it’ll be paid by the public services that collapse whilst waiting for salvation that was never real.


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes:

  • developing the UK’s national drink and needle spiking advice service used by 81% of police forces in England and Wales – praised by victim support organisations
  • creating user journeys for 5.6 million people claiming Universal Credit and pioneering government digital standards for transactional content on GOV.UK
  • restructuring thousands of pages of advice for Cancer Research UK’s website, which serves four million visitors a month.