
Workplace Insights by Adrie van der Luijt
The UK government just published comprehensive research on AI skills. It is mainly making headlines – and generating ridicule on social media – for the government's announcement that everyone will be offered 20 minutes of training in how to write basic AI prompts, an initiative it bizarrely likened to the launch of the Open University.
The research itself involved multiple work packages: a review of 139 pieces of literature, surveys of 1,189 people and 801 employers, and workshops with experts and the public. In other words, serious money and a serious effort.
Their findings are actually useful. 73% of people think they’ve used AI in the past month, but only 17% can explain what it is. Two-thirds of Brits distrust AI products and services. Women are less confident and less likely to have had training. Business leaders struggle to identify viable AI use cases but are deploying systems anyway.
Their conclusion is that workers need more AI skills training to build trust and confidence.
In my view, that conclusion fundamentally misdiagnoses the real problem.
Before we even get to the rest of the findings, there’s a telling detail. The government published this research on a new AI skills site that should never have gone live.
Sarah Winters, founder of Content Design London, documented the accessibility failures listed in the site's own accessibility statement: inconsistent semantic structure, pop-up elements that trap keyboard users, fixed headers obscuring content, input controls that don't work with keyboards, comment boxes that trap focus and poor colour contrast.
“That’s not the half of it,” she wrote.
So they’ve published research about preventing AI from exacerbating existing inequalities on a site that excludes disabled people. They’re worried about future skills gaps while creating present exclusion.
This isn't just irony; it's the pattern I've watched for 40 years: organisations don't check if their own systems work before deploying them. Then they research why people can't use them. Then they conclude that the users lack skills.
Anyway, the research is certainly comprehensive. It uses mixed methods, decent sample sizes and multiple work packages. They’ve clearly spent money and time on this. The findings on “skills for understanding” versus “skills for use” are genuinely useful and that distinction matters.
They've noticed that trust is the real problem. The "iceberg effect" analysis, in which surface concerns about misinformation sit atop deeper fears and a sense of powerlessness, is a solid insight.
The finding that employers are training for "skills for use" while neglecting "skills for understanding" is important, especially when they note that judging accuracy and reliability is the skill employees think is important, yet the one employers aren't providing training for. That's the opposite of what you'd want if you actually cared about outcomes.
They’ve spotted regional concentration, with 60% of AI expert vacancies in London and the South East. They mention that women are less confident and less likely to have had training. These are real patterns.
But where it falls apart is that the entire premise treats AI adoption as inevitable and neutral. Look at this language throughout: “as AI becomes increasingly embedded”, “prepare for projected growth” and “keep pace with rapid evolution”. That is AI marketing talk, not objective research.
It reminds me of an open letter sent by expert AI scientists to Ursula von der Leyen, President of the European Commission, last November. They criticised her speech on AI as "marketing statements driven by profit-motive and ideology rather than empirical evidence and formal proof".
Not once in this latest UK government research do they ask whether we should deploy these systems. Do they actually work? What harm are they causing?
This is the classic move of parking moral questions in technology projects. Instead of asking “Is this good?”, they ask “How do we skill up for this?”. The workers become the problem – they have a “skills gap” – rather than asking whether the business decisions to deploy half-working AI systems serve anyone except quarterly reports.
They’ve found that people don’t trust AI. Two-thirds of Brits are nervous about AI products and services. People reference “terminator outcomes”.
Their conclusion is that people need more education about AI to build trust.
This is arse-backwards. Pardon my language. The distrust isn't a deficit. It's a rational response to observable harm. When people see AI chatbots hallucinating medical advice, when they see biased hiring tools, when they see surveillance systems that don't work but get deployed anyway, that's not a failure to understand AI. That's understanding it perfectly well. Trust is absolutely central to my work, both in government digital and previously as an editor. You can't blag your way to building trust by telling people not to trust their own eyes and ears. To do so is not sensible nudging but kowtowing to AI providers.
My micro-trauma framework also applies here. People aren’t arriving at AI systems fresh and eager. They’re arriving exhausted, disillusioned and less focused than they were before the pandemic. They’re being told they need to develop “skills for understanding” to navigate systems that organisations have chosen to deploy without first checking whether they work.
The content isn't working harder to be understood. Instead, another layer of burden is being added to the people using it.
The entire framing is that workers lack skills, not that organisations are making bad deployment decisions. Look at this: "56% of employers whose businesses are currently using or planning to use AI rate the level of knowledge in their business overall as 'beginner' or 'novice'".
Right, so you’re deploying systems your workforce doesn’t understand. That’s not a training problem, but a governance problem. That’s executives making decisions without involving the people who’ll have to live with the consequences.
They note that business leaders “face difficulties identifying viable AI use cases”. Brilliant. So we’re spending millions on skills training for technology that even business leaders can’t identify good uses for. That’s not a skills gap, but organisations deploying AI because they think they should, not because they’ve identified actual problems it solves.
They mention the digital divide briefly but don't connect the dots. AI skills gaps will exacerbate that divide, which is fundamentally about class and poverty, not age. The report treats this as an unfortunate side effect to manage, not as a predictable outcome of the choices being made.
Look at the regional concentration: 60% of AI expert vacancies in London and the South East. They note "emerging clusters" in Cambridge, Bristol, Oxford, Manchester and Reading – all existing tech hubs. This isn't organic growth, but deliberate economic choices creating structural exclusion.
Then they ask: “How can the UK use an AI skills agenda to close, rather than exacerbate existing gaps?”
Well, you can’t. Not when the premise is that everyone needs to skill up for systems that concentrate power and money in places that already have both. The inequality is a feature, not a bug.
Here’s what they don’t discuss:
Outcomes. Do these AI systems actually work? What’s the demonstrated value beyond “efficiency”? They mention AI in healthcare, but don’t ask whether AI diagnostic tools are better than human diagnosticians. What’s the error rate? Who’s liable when they’re wrong?
Displacement. They project 3.9 million people in roles involving “core AI activities” by 2035. But they treat this as a job transformation, not a job loss. Where’s the analysis of who loses work? Where’s the discussion of what happens to people whose jobs get automated away?
This isn’t paranoia. I’ve watched it happen. The Met Police work I did on trauma-informed content was adopted by 81% of UK forces. That’s work done by humans understanding humans. What happens when that gets “optimised” by AI that can’t understand trauma?
Power. Who decides which AI systems get deployed? Who benefits? Workers aren’t consulted. Business leaders “struggle to communicate decisions” to staff. That’s not a skills gap, but a power gap.
And the perfect illustration of that power gap is that while this research was being conducted, the UK government was quietly embedding Palantir – Peter Thiel’s surveillance firm and a Trump ally – into critical national infrastructure. Research published by The Nerve this week reveals at least 34 government contracts worth over £672m, including £15m with the nuclear weapons agency that manages Britain’s nuclear deterrent.
The company’s CEO, Alex Karp, has said Palantir exists “to scare enemies and on occasion kill them”. Its executive vice-president for the UK is Louis Mosley, grandson of British fascist leader Oswald Mosley.
So while ordinary workers are being told they lack the skills to trust AI, the government is handing management of nuclear weapons to a firm run by a far-right billionaire who says that he “no longer” thinks “that freedom and democracy are compatible”, overseen by someone who speaks at conferences alongside Nigel Farage and Jordan Peterson.
MPs are calling this a “gaping national security vulnerability”. But there’s no discussion of worker consultation, no democratic oversight, no assessment of whether this serves the public good. Just contracts signed without tender, shepherded through by people with financial interests in the company.
That’s what happens when the question is “how do we skill up workers” rather than “who should control these systems and for what purpose”.
Actual harm. There is no discussion of people who’ve already been harmed by AI systems. No case studies of benefits denied, applications rejected or surveillance deployed. The focus on “future skills” ignores present harms.
I can tell you about present harms. I’ve seen what happens when systems are deployed without first checking whether they work. People lose benefits they’re entitled to. People can’t get help when they’re in crisis. People give up because the system is too hard to navigate. That’s not a skills gap. It’s harm.
Alternatives. Never once do they ask: what if we didn’t deploy these systems? What if we invested in improving the services that already exist? What if we focused on making content and services work for the people using them now, rather than adding AI layers?
This research involved multiple work packages, thousands of survey respondents, expert panels and stakeholder workshops. That's serious money. Money that could have gone to improving the services that already exist and making content work for the people using them now.
Instead, we’re spending it on solving a problem – “AI skills gaps” – that exists because organisations are choosing to deploy systems without checking if they work or if anyone wants them.
The really infuriating bit is that they’re creating demand for AI skills training, which creates jobs in AI skills training, which requires people to pay for AI skills training (or organisations to pay for it), which creates more demand for AI skills, which… You see where this goes.
This is exactly the pattern I’ve identified: AI deployed specifically to avoid human judgement that would catch failures. Then, AI skills training is deployed to paper over the fact that the systems don’t work. Then, more AI is needed to “fix” the problems created by the first AI.
The workers aren’t the problem. The workers ARE the economy. But this report treats them as inputs to optimise, rather than people whose lives and livelihoods matter.
Instead of “how can we skill up for AI”, they should have asked:
Do these systems work? Before deploying AI, demonstrate that it performs better than existing approaches. Not “efficiency”, but actual outcomes for actual humans.
Who benefits? Not shareholders, but users. If the primary beneficiary is cost reduction rather than service improvement, don’t deploy it.
Who’s harmed? Identify potential harms before deployment, not after. Include workers who’ll be displaced, users who’ll be excluded and communities that’ll be surveilled.
What’s the alternative? Could we achieve better outcomes by improving existing services? By hiring more workers? By making content work better?
Who decides? Workers and users should have meaningful input into deployment decisions, not just be told about them afterwards.
This reminds me of the FCA Consumer Duty work I do. The Duty requires demonstrable understanding, not just technical compliance. It requires you to show that content works for users, not just that it exists.
This report is the opposite. It assumes AI deployment and asks how to train people for it. That’s compliance theatre. The real question is: are these systems fit for purpose? Do they meet user needs? Can people actually understand and use them?
The answer, based on their own findings, is no. People don’t trust them. People can’t explain them. People feel powerless and fearful. That’s not a skills gap, but a deployment gap.
If you’re citing this research, cite the findings but challenge the premise. The data on trust, on regional inequality, and on the gap between employer and employee confidence are all useful.
But refuse the frame that this is about skilling up workers. The frame should be: why are we deploying systems that don’t work for the people using them? Why are we treating workers as the problem rather than as the solution?
The research is comprehensive, but it’s wrong about the real problem. The problem isn’t that people lack AI skills. The problem is that organisations are deploying AI without regard for whether it works, who it harms or what alternatives might serve people better.
That’s not a skills gap, but a governance failure, a moral failure and a failure of imagination. And no amount of training will fix it.

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes: