
Workplace Insights by Adrie van der Luijt
I woke up in a cold sweat last night after reading about North Korean operatives using deepfakes to infiltrate western companies.
My mind immediately went to the systems I’ve helped build over the years. What if someone used this technology to claim Universal Credit fraudulently? Could deepfakes compromise the identity verification in GOV.UK One Login? The Covid grant schemes I worked on at the Cabinet Office – were they vulnerable too?
After twenty cups of tea and several hours of research, I’ve come to a conclusion that might surprise you: we’re probably worrying about the wrong things.
Most deepfake stories making the rounds focus on job applicants using AI to misrepresent themselves in remote interviews. The US Department of Justice has made arrests related to North Korean IT worker schemes. KnowBe4, a cybersecurity firm, admitted hiring someone who used a deepfake profile photo and a stolen identity.
These cases are concerning, but I’ve not found a single documented example of deepfakes being used to defraud UK government services. Not one. Which made me wonder why.
When I worked on Universal Credit, we spent countless hours designing verification processes that were necessarily mundane but remarkably effective. The system requires multiple proof points that a deepfake alone simply can’t circumvent.
For Universal Credit claims, you need a bank account, a verifiable address and usually an in-person appointment. You must provide documentation that can be cross-checked against other government databases. A convincing video call performance alone wouldn’t get you very far.
The Covid grant counter-fraud systems I helped develop at the Cabinet Office followed similar principles. We created tiered verification matched to risk levels. Higher-value grants required more stringent proof, often linking to existing tax records that would be extraordinarily difficult to fake with current technology.
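To make that concrete, here is a minimal sketch of how risk-tiered verification can work. The tier boundaries and evidence types are illustrative assumptions of mine, not the actual scheme rules.

```python
# A minimal sketch of risk-tiered verification. Tier boundaries and
# evidence types are illustrative assumptions, not real scheme rules.

TIERS = [
    # (maximum grant value in GBP, evidence required at that tier)
    (1_000, {"bank_account", "verified_address"}),
    (10_000, {"bank_account", "verified_address", "photo_id"}),
    (100_000, {"bank_account", "verified_address", "photo_id", "tax_record_match"}),
]

def required_evidence(grant_value: int) -> set[str]:
    """Return the evidence set for the lowest tier that covers this value."""
    for max_value, evidence in TIERS:
        if grant_value <= max_value:
            return evidence
    # Anything above the top tier needs everything, plus manual review.
    return TIERS[-1][1] | {"manual_review"}

def sufficiently_verified(grant_value: int, evidence_held: set[str]) -> bool:
    """A claim proceeds only when every required proof point is present."""
    return required_evidence(grant_value).issubset(evidence_held)

# A convincing video performance supplies none of the required evidence:
print(sufficiently_verified(50_000, {"live_video"}))  # False
```

The point the sketch makes is the same one the process makes: a deepfake can only ever supply one kind of evidence, and no tier accepts that kind alone.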
This isn’t to say these systems are perfect – they absolutely aren’t – but their vulnerabilities lie elsewhere.
Having worked on digital services for both Universal Credit and the Metropolitan Police, I’ve seen how verification systems function in sensitive environments. The actual weak points rarely involve sophisticated technology.
With the Met Police’s online reporting systems, the greater concern isn’t someone using a deepfake to file a false report. It’s the ability to create convincing false evidence or to manipulate existing evidence. Imagine doctored dashcam footage in a traffic incident or fabricated screenshots in a harassment case.
For benefits systems, the more plausible threat comes from the authentication stage after identity is established. If credentials are compromised, a deepfake might help bypass additional security questions during account recovery. But this would be an enhancement to traditional fraud techniques, not a replacement.
GOV.UK One Login presents an interesting case study. As the government’s unified identity platform, it should in theory be a prime target for deepfake-enabled fraud. But its multi-layered approach to verification makes pure deepfake attacks impractical.
The system requires document verification, biometric matching and knowledge-based authentication. A deepfake might help with a video verification step, but it can’t produce a fake passport that passes digital verification or answer questions about your credit history.
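In effect, the layered model is a logical AND across independent checks. Here’s a sketch of that idea; the layer names are my own illustration, not One Login’s actual interfaces.

```python
# Sketch: layered identity verification as an AND across independent
# checks. Layer names are illustrative, not GOV.UK One Login's API.

def identity_verified(results: dict[str, bool]) -> bool:
    """Every layer must pass; failing any one fails the whole check."""
    layers = ("document_check", "biometric_match", "knowledge_check")
    return all(results.get(layer, False) for layer in layers)

# A deepfake that fools the biometric step alone still fails overall:
print(identity_verified({"biometric_match": True}))  # False
print(identity_verified({"document_check": True,
                         "biometric_match": True,
                         "knowledge_check": True}))  # True
```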
What’s more concerning is how deepfakes might be used alongside traditional social engineering. Imagine a convincing deepfake video call from someone claiming to be from GOV.UK support, asking for your security details to “verify your account”. That’s where the real vulnerability lies.
The corporate cases of deepfake job candidates offer important lessons. What made these frauds successful wasn’t just the technology, but the complete package: stolen identities, doctored documents and convincing backstories.
The security firm that accidentally hired a North Korean operative did so because their verification relied too heavily on visual confirmation during video calls. They didn’t adequately cross-reference other identity markers or notice inconsistencies in the candidate’s background.
Government services, for all their faults, typically excel at this kind of cross-referencing. When I worked on Rural Payments, we didn’t just verify a farmer’s identity. We cross-checked land registry data, previous subsidy claims and other touchpoints that would be nearly impossible to fabricate consistently.
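The cross-referencing itself is conceptually simple, as the sketch below shows. The dataset names and fields are placeholders I’ve invented for illustration, not the actual Rural Payments data model.

```python
# Sketch of cross-referencing a claim against independent data sources.
# Dataset names and fields are invented placeholders for illustration.

def cross_reference(claim: dict, land_registry: dict, subsidy_history: dict) -> list[str]:
    """Return inconsistencies to flag for human review; empty means consistent."""
    flags = []
    parcel = land_registry.get(claim["parcel_id"])
    if parcel is None:
        flags.append("parcel not found in land registry")
    elif parcel["registered_owner"] != claim["claimant_name"]:
        flags.append("claimant does not match registered owner")
    previous = subsidy_history.get(claim["claimant_id"], [])
    if previous and claim["parcel_id"] not in {p["parcel_id"] for p in previous}:
        flags.append("parcel absent from claimant's previous claims")
    return flags
```

A fraudster would need every one of those independent sources to tell the same false story at once, which is far harder than producing one convincing video.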
Rather than worrying about deepfakes infiltrating our existing verification systems, I believe the greater threat comes from three directions:
First, the use of deepfakes to obtain real credentials through social engineering. Why break the verification system when you can trick someone into giving you legitimate access?
Second, the combination of deepfakes with stolen identity information. As more personal data is compromised through breaches, convincing impersonation becomes easier.
Third, the indirect use of deepfakes to undermine trust in digital government. If citizens believe government systems are vulnerable to deepfakes, trust erodes, even without actual fraud taking place.
Having spent years in the trenches of government digital services, I believe our approach should be measured but proactive. We don’t need to redesign our verification systems from scratch, but we do need to enhance protection at specific vulnerability points.
For systems like Universal Credit, adding simple liveness checks, such as asking users to perform specific, unpredictable actions during video verification, could provide protection against sophisticated deepfakes without creating undue barriers for legitimate users.
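Here’s a sketch of such a challenge-response liveness check, assuming an invented action list and response window:

```python
import secrets
import time

# Sketch of a challenge-response liveness check. The action list and
# the response window are illustrative assumptions, not a specification.

ACTIONS = ["turn your head to the left", "blink twice",
           "raise your right hand", "read this number aloud"]

def issue_challenge() -> tuple[str, float]:
    """Pick an unpredictable action and record when it was issued."""
    return secrets.choice(ACTIONS), time.monotonic()

def challenge_passed(action_detected: bool, issued_at: float,
                     window_seconds: float = 5.0) -> bool:
    """Pass only if the requested action happened within the window.

    A pre-rendered deepfake can't know which action will be requested;
    a live-generated one has to produce it convincingly within seconds.
    (Detecting the action in the video feed is assumed to happen upstream.)
    """
    return action_detected and (time.monotonic() - issued_at) <= window_seconds
```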
For GOV.UK One Login, focusing on strengthening the human elements through better training and awareness might yield better results than technological solutions alone.
And for all systems, clear logging and analytics to detect unusual patterns will catch most fraudulent activity, deepfake-assisted or otherwise.
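That kind of analysis can be as unglamorous as counting events per account. Here’s a sketch, with invented event names and a threshold chosen purely for illustration:

```python
from collections import Counter

# Sketch of pattern detection over an event log. Event fields and the
# threshold are invented for illustration, not a production rule set.

def flag_unusual_accounts(events: list[dict],
                          max_recoveries_per_day: int = 3) -> set[str]:
    """Flag accounts with excessive account-recovery attempts in a day."""
    recoveries = Counter(
        (event["account_id"], event["date"])
        for event in events
        if event["type"] == "account_recovery_attempt"
    )
    return {account for (account, _day), count in recoveries.items()
            if count > max_recoveries_per_day}

log = [{"account_id": "A1", "date": "2025-05-01",
        "type": "account_recovery_attempt"}] * 5
print(flag_unusual_accounts(log))  # {'A1'}
```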
The deepfake bogeyman makes for exciting headlines, but the reality of government service security is both more mundane and more robust than these stories suggest. The real vulnerabilities lie not in the technology itself, but in how we implement and support it.
After all, as someone who’s designed verification workflows for everything from farmer subsidies to police reporting systems, I’ve learned one enduring truth: most security breaches don’t come from exotic technology. They come from human error, rushed implementation, or overlooked details in boring, everyday processes.
And that’s something no deepfake can fix.
Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.