
Workplace Insights by Adrie van der Luijt
Last week, I watched a colleague demonstrate how ChatGPT could instantly create alt text for images. “Look,” they said, “AI and accessibility problems, solved!” I bit my tongue, thinking of the countless hours I’d spent on government digital services struggling to create truly inclusive systems. If only the relationship between AI and accessibility were that simple.
As someone who has witnessed four decades of digital evolution (from green-screen terminals to cloud computing, from hand-coded HTML to generative AI), I’ve seen how each technological wave brings both new possibilities and new barriers. The current AI and accessibility revolution is no different, except perhaps in the breathtaking speed of its deployment and the concentration of its control in remarkably few hands.
The accessibility implications of this AI shift demand our urgent attention. Because when we talk about digital autonomy – the ability to access, understand and critically evaluate information – we must remember that for many users, basic access remains the first hurdle.
The potential benefits of AI and accessibility innovations are genuinely exciting. Voice interfaces could revolutionise computing for people with mobility impairments. Image recognition might describe visual content for blind users. Language models could simplify complex text for those with cognitive disabilities or translate content for non-native speakers.
I’ve spent years crafting government forms where every word choice could determine whether vulnerable people received essential benefits. The prospect of AI and accessibility working together to personalise interfaces dynamically to individual needs represents a quantum leap from the rigid systems I helped pioneer.
But the current implementation falls woefully short of this promise. Voice interfaces struggle with non-standard speech patterns, accents and speech impediments. Image recognition systems produce wildly inconsistent alt text. And language models often generate content that sounds fluent but lacks logical coherence, creating particular challenges for users with cognitive disabilities who may not recognise these subtle inconsistencies.
Perhaps most troublingly, accessibility features are frequently locked behind premium paywalls. When I worked on Universal Credit, we designed systems knowing they would be used by people in desperate financial circumstances. Today’s emerging two-tier AI landscape, with basic models free but enhanced capabilities requiring a subscription, threatens to create new economic barriers to essential information and services.
The concentration of AI development in the hands of a few American tech companies has profound implications for global accessibility. These companies primarily train their models on US English-language content created by and for predominantly young, educated, Western users. The resulting systems work reasonably well for people who resemble those in the training data but can fail spectacularly for others.
I worked on my first government digital project in 1987. In those early days before the internet, I remember how Dutch government efforts to replicate the French Minitel system struggled with basic accessibility concerns. Today, we face similar challenges on a vastly larger scale. When four or five companies effectively control the interfaces through which billions access information, their design decisions – and omissions – matter enormously.
I’ve seen the consequences of centralised control during my work pioneering content design standards for the UK Government Digital Service. Even with explicit accessibility mandates, the needs of non-standard users were often treated as edge cases rather than core requirements. In today’s AI landscape, with far less regulatory oversight, the situation is considerably worse.
Perhaps the most dangerous accessibility myth surrounding AI is that it can independently verify its own accessibility. Having led digital teams through countless Government Digital Service assessments, I can state unequivocally: technology alone cannot determine if a system is truly accessible.
AI might flag missing alt text or suggest simpler language, but it cannot tell you if a screen reader user can actually complete a task or whether your content makes sense to someone with cognitive disabilities. These assessments require human testing with diverse users.
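To illustrate how shallow that automated flagging is, here is a minimal sketch (my own illustration, not any particular vendor’s tool) that flags images with missing or empty alt attributes. It can find gaps, but it cannot judge whether the alt text that does exist conveys anything useful to a screen reader user:

```python
# Minimal sketch of the kind of shallow check automation can do: flag <img>
# tags with a missing or empty alt attribute. It cannot tell whether the alt
# text that *is* present actually makes sense to a screen reader user.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attributes = dict(attrs)
            if not (attributes.get("alt") or "").strip():
                self.missing.append(attributes.get("src", "<unknown src>"))

checker = MissingAltChecker()
checker.feed('<img src="chart.png"><img src="logo.png" alt="Company logo">')
print(checker.missing)  # ['chart.png'] -- flagged, yet "Company logo" may still be useless alt text
```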
When I developed transactional content guidelines for Universal Credit, we discovered countless edge cases that no algorithm could have anticipated. A blind user might navigate your form perfectly but get stuck on a specific interaction. A dyslexic user might understand most content but struggle with particular terminology.
Organisations rushing to implement AI-generated content still need conventional accessibility verification methods. Screen reader testing, keyboard navigation checks, colour contrast analysis, and – most importantly – sessions with actual users with disabilities remain essential. No service would ever pass a GDS assessment solely on the strength of an AI’s assurance of accessibility compliance.
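One of those conventional checks, colour contrast analysis, comes down to a simple calculation. The sketch below (an illustration based on the WCAG 2.x contrast formula, not a complete audit tool) shows how the ratio is computed; meeting 4.5:1 for body text is necessary at level AA, but it says nothing about whether the page actually works for real users:

```python
# Minimal sketch of a WCAG 2.x colour contrast check between two sRGB colours.
# A passing ratio is necessary but nowhere near sufficient for accessibility.
def relative_luminance(rgb):
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0 -- maximum contrast
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # 4.48 -- just below the 4.5:1 AA threshold
```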
So how do we harness AI’s potential while avoiding its accessibility pitfalls? Based on my experience spanning from early digital systems to government accessibility standards, I recommend several approaches:
True digital autonomy – the ability to access, understand and critically evaluate information – is impossible without accessibility. As AI increasingly mediates our information landscape, ensuring AI and accessibility work together becomes a fundamental equity issue.
I’ve witnessed each wave of digital evolution bring new accessibility challenges, from early websites requiring HTML knowledge to complex content management systems. Each time, thoughtful design and inclusive testing eventually improved access. But the rapid evolution of the AI and accessibility landscape, and its concentration in so few hands, make this cycle potentially more damaging.
As professionals implementing or advising on AI systems, we have both an opportunity and a responsibility to ensure that AI and accessibility work together to narrow, rather than widen, existing digital divides. Otherwise, we risk creating a world where digital autonomy becomes a privilege for some rather than a right for all.
Because if there’s one thing four decades in digital transformation has taught me, it’s this: technology that doesn’t work for everyone ultimately fails us all.
Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.