
Workplace Insights by Adrie van der Luijt
There’s something genuinely alarming happening in organisations using AI, and it goes beyond job displacement. It is already visible in the civil service, for example: the creation of an expertise vacuum in which humans can no longer effectively evaluate or challenge the outputs they’re receiving.
The current UK government is particularly keen to deploy AI tools to transform public services and indeed the civil service itself. AI can research, summarise, and draft briefings, reports and policy documents.
But who is left to verify the veracity of all that AI-generated output once the mid-tier jobs of those who traditionally did the checking have been replaced?
I spoke with a former civil servant last month who described this exact situation. “We’ve started using AI systems to analyse policy impacts and generate briefing papers,” she told me. “The problem is that we’ve simultaneously reduced the number of specialists who truly understand these domains. We’re increasingly in a position where nobody in the room has the depth of knowledge to properly question what the system is telling us.”
This isn’t unique to government. In financial services, algorithmic trading systems make decisions at speeds and complexities beyond human comprehension. In healthcare, diagnostic systems identify patterns in medical images that human physicians may not recognise.
In countless organisations, AI systems generate analysis and recommendations that the remaining human employees lack the specialist knowledge to properly evaluate.
The danger here is profound. When organisations hollow out their expertise in the pursuit of efficiency, they create a dangerous dependency. The knowledge required to judge whether an AI system is functioning properly, has appropriate parameters or is generating valid conclusions gradually disappears from the organisation.
What makes this particularly concerning is how it interacts with existing power structures. Those who control these systems – whether tech companies, senior management or government leaders – gain unprecedented authority when others lack the expertise to challenge their outputs.
“Computer says no” becomes an extraordinarily powerful argument when nobody remains who can confidently explain why the computer might be wrong.
I witnessed this first-hand in a meeting with a marketing team where an AI system had generated customer segmentation analysis. The results contradicted years of the team’s experience with their market, but nobody felt confident challenging the system’s conclusions.
They lacked the statistical expertise to articulate precisely why the analysis might be flawed. Their human knowledge had been devalued to the point where it could no longer effectively push back.
This expertise vacuum creates risks that go far beyond job losses. It threatens to undermine the very foundations of reasoned decision-making within organisations and society more broadly. When we lose the capacity to evaluate information independently of the systems generating it, we surrender a fundamental form of agency.
For professionals navigating this shift, the implications are sobering. Deep specialist knowledge – not just in operating AI systems but in independently understanding the domains they address – may become increasingly precious even as mid-tier execution roles disappear.
The ability to effectively challenge AI outputs from a position of genuine expertise could become one of the most valuable skills in numerous professions.
Organisations facing this challenge need to consider not just efficiency gains but knowledge preservation. The institutional expertise built over decades doesn’t exist solely in databases and documentation. It lives in the accumulated wisdom and judgement of experienced professionals. When those roles disappear, that knowledge often vanishes with them.
What’s needed is not blind resistance to AI adoption but a thoughtful approach that preserves human expertise alongside technological capability. This might mean redefining roles to emphasise oversight and evaluation rather than execution, or creating positions specifically focused on validating and challenging system outputs.
For individuals, this suggests that developing genuine domain expertise – the kind that allows you to confidently evaluate information regardless of its source – may become more valuable than ever.
The future may belong not to those who simply use AI tools effectively, but to those who retain the independent judgement to know when those tools are wrong.
The expertise vacuum we’ve identified represents perhaps the most profound risk in our current technological transition. If we lose the human capacity to effectively oversee the systems we create, we risk surrendering not just jobs but the very foundations of informed decision-making across society.