Workplace Insights by Adrie van der Luijt

Government digital accountability and FCA rules

Why SMCR principles would force senior managers to prove content works for vulnerable users

Personal accountability for senior managers would transform government digital by making someone specific answerable when content fails vulnerable users.

Every time I work on a government digital project, I’m struck by how comfortable everyone is with accountability being nowhere and everywhere simultaneously. There are governance boards and steering committees – gold groups even, in the Met Police – and senior responsible officers and delivery leads, all of whom have some vague responsibility for quality or user experience or accessibility. And when content fails vulnerable users, which it does with depressing regularity, the accountability diffuses into the organisational ether like smoke.

I’ve been thinking about what would happen if government digital projects had to follow the Senior Managers and Certification Regime that financial services firms live under. SMCR is the Financial Conduct Authority’s (FCA) framework for pinning accountability on named individuals. Not teams, not departments, not programmes. Specific human beings with their names attached to specific responsibilities.

I have previously written about how odd it is that government digital projects don’t have to follow an equivalent of the FCA’s Consumer Duty. This is another, equally compelling example. The basic mechanism is simple and brutal: senior managers get allocated clear, written responsibilities. When something goes wrong in their area, they’re personally accountable. They have to prove they took reasonable steps to prevent the failure. It’s guilty until proven innocent, and it means you can’t hide behind process or governance structures or the claim that everyone’s responsible so no one really is.

What would this look like in government digital? And more importantly, would it work?

The current state of accountability

Let me tell you what accountability looks like right now. I worked on Rural Payments content, back when it was clear that the guidance was creating barriers for people with cognitive disabilities and reduced digital literacy. Everyone involved knew it. The research showed it. Users were telling us directly through support channels. But the content stayed as it was because changing it meant navigating policy constraints and stakeholder concerns and delivery timelines, and no specific person had to stand up and explain why they thought it was acceptable to publish content that demonstrably didn’t work for people in acute financial crisis.

There were people who cared deeply. Content designers and researchers who pushed back, who documented the problems, who made the case for change. But the system isn’t set up to reward that kind of advocacy, and it certainly isn’t set up to penalise the absence of it. When you can point to having followed the process, ticked the governance boxes and consulted the right stakeholders, you’re covered. Whether the content actually works for real humans in their actual circumstances becomes almost beside the point.

I’ve seen the same pattern at a different scale and velocity while working on other government digital projects. Content needed to be published fast as a minimum viable product (MVP) against tight timelines and budget restrictions, which is fine and often necessary. But the lack of personal accountability meant that, when content failed people with cognitive disabilities or language barriers or who were simply terrified and not processing information well, there was no mechanism to make that anyone’s specific problem to solve immediately. The content might get reviewed in the next iteration and flagged for improvement, but the person currently being harmed by it wasn’t any one named individual’s responsibility.

What SMCR would actually mean

Under SMCR, every piece of published content would have to be attributed to someone. Not the delivery team or the content function, but Sarah Jones, who certified that this guidance meets minimum standards for people with reduced literacy and cognitive capacity. Michael, who approved this service description as accessible to users in crisis. Specific people making specific certifications about specific outcomes.

The named senior manager responsible for content accessibility would have to demonstrate what reasonable steps they took to prevent failures. Did you test with people who have aphasia? Show me the research reports. What changed as a result of that testing? Document your decisions. You can’t say you considered accessibility and leave it at that. You have to prove it, with evidence, with documented reasoning, with a clear trail of decisions and their rationale.

And if content repeatedly fails vulnerable users, there are personal consequences. You can’t hold similar roles in other departments. Your professional reputation is attached to your track record. Not the programme’s track record, not the department’s, yours.

But what about Agile teams?

This is where people start getting uncomfortable and I understand why. Modern government digital delivery is built on Agile principles, multidisciplinary teams and collective ownership. The whole point is that we don’t work in silos with individual specialists controlling their bit. We work together, we make decisions collectively and we share responsibility for outcomes.

So doesn’t SMCR contradict everything we’ve built around collaborative team delivery?

No. It clarifies it.

Working in Agile teams doesn’t mean no one’s accountable. It means the team is accountable for delivery, but someone specific is accountable for ensuring the team has what it needs to deliver well and that the outputs meet defined standards before they reach users. Those are different kinds of accountability and conflating them is how we end up with the current situation where everyone’s theoretically responsible but no one’s actually answerable.

Think about it this way. An Agile team might include content designers, developers, researchers, product managers and delivery leads. They work together to create and iterate content. That’s collective delivery responsibility and it works. But someone specific still needs to be accountable for certifying that the content meets accessibility standards for vulnerable users before it goes live. Someone specific needs to ensure the team has access to users with cognitive disabilities for testing. Someone specific needs to approve the decision to publish despite known limitations and document why that’s acceptable.

In financial services, Agile teams deliver products all the time. But there’s still a named senior manager who’s accountable for ensuring those products meet regulatory requirements and don’t harm vulnerable customers. The team collaborates on delivery. The senior manager is answerable for outcomes and for proving reasonable steps were taken.

The Agile objection is often really about protecting collaborative working culture from hierarchical accountability. But those things aren’t opposed. You can have flat, collaborative teams and still have named individuals who are ultimately answerable for whether the work meets defined standards. In fact, that clarity of accountability often helps teams because it means someone with actual power has to advocate for the resources and time they need to do quality work.

The objections that will come up

Let me anticipate what people will say, because I’ve been in enough of these conversations to know the patterns.

First: this will slow everything down. Having to document decisions and prove reasonable steps will add bureaucracy and make Agile delivery impossible.

Except it won’t, not really. What slows delivery down is rework because content failed users and needs fixing. What slows delivery down is endless stakeholder consultation trying to achieve consensus when no one’s actually accountable for the decision. What slows delivery down is publishing content you know doesn’t work, dealing with the support costs, and then eventually fixing it anyway. Personal accountability actually speeds things up because it forces clear decision-making and proper documentation of the decisions you’re making anyway.

Second: this will make good people leave. Senior practitioners won’t want to take on the personal risk and will move to safer environments.

This assumes that the current system attracts and retains good people, which is debatable. In my experience, the best content people are desperate for someone with actual authority to care about whether content works for vulnerable users. They’re exhausted from making the case for quality and being told it’s not a priority or there’s no time or the policy team won’t allow it. Personal accountability means someone senior has skin in the game, which makes it easier for practitioners to do good work, not harder.

Third: government is different from financial services. We can’t have the same regulatory approach.

The government can absolutely use the same regulatory approach for the services it provides to citizens. We’re not talking about regulating government as an entity. We’re talking about government holding itself to the same standards of accountability it requires from regulated industries. If financial services firms have to prove they’re not harming vulnerable customers, government digital services should have to prove the same thing. The power imbalance is actually greater with government services because people often have no choice but to use them.

Fourth: this assumes there are clear standards to measure against, and in content there aren’t.

There are loads of standards. The GDS Service Standard. The Web Content Accessibility Guidelines (WCAG). Plain English guidance. You can argue about whether they’re sufficient, but the bigger problem is that even meeting the standards we have isn’t consistently enforced. And for vulnerable users, we actually have clear guidance now from Consumer Duty principles that can be adapted. Content should work for people with reduced capacity. Information should be front-loaded. Language should be appropriate to the user’s emotional state. These aren’t mysterious or subjective requirements.

Fifth: this is just blame culture dressed up in regulatory language.

This is the opposite of blame culture. Blame culture is what we have now, where failure gets diffused across the system, and no one is specifically answerable, so the people who get blamed are usually practitioners without decision-making power. Personal accountability for senior managers means the people with authority to change things are the ones who have to explain why they didn’t. It protects practitioners by making it clear who is responsible.

What would actually change

If government digital worked under SMCR principles, the difference wouldn’t be whether testing with vulnerable users happens. GDS already expects that as part of assessment preparation. The difference would be someone specific being personally accountable for whether that testing was rigorous enough, whether it included the right vulnerable users for the specific content and what happened with the findings. Right now you can test, document the problems users had and still publish the content if policy or timelines or stakeholder concerns override the research. Under SMCR, the named senior manager would have to explain why they believed it was reasonable to proceed despite knowing the content would fail certain vulnerable users. That’s a completely different kind of accountability.

Design decisions are already documented for GDS assessments at every stage, so that’s not new. What would be different is that someone named would be personally accountable for those decisions when content fails vulnerable users. Right now, you document decisions to show you followed the process and met the service standard. Under SMCR, you’d document them to prove you took reasonable steps to prevent harm to people with reduced capacity. Those are completely different purposes. One is about process compliance, the other is about outcome accountability. When content fails someone, you’d need to show not just that you followed best practice, but that you specifically considered and tried to prevent that harm happening to that type of user.

You couldn’t hide behind policy constraints to publish impossible content. If the policy says something has to be communicated in a way that’s demonstrably harmful to vulnerable users, the senior manager accountable for content accessibility would have to document that they flagged this, proposed alternatives and were overruled. That documentation creates institutional memory and makes the dysfunction visible in ways that drive change.

Bad content would get fixed faster because someone would be personally accountable for the ongoing harm. Not in six months when the next sprint cycle allows for it, but immediately, because your name’s on it and the longer it stays broken, the harder it is to argue you took reasonable steps.

Why this matters for content practitioners

I know this sounds like more management oversight, more governance, more requirements that make your work harder. But actually, personal accountability for senior managers is one of the best things that could happen for content practitioners.

Right now, when you push for proper user research with vulnerable users, you’re often told there’s no time or budget. When you flag that content will fail people with cognitive disabilities, you’re often overruled by policy or stakeholder concerns. When you document accessibility issues, they go into a backlog that never gets prioritised.

Under SMCR, the senior manager accountable for content accessibility has to prove they took reasonable steps. That means when you say the content needs testing with people with aphasia, they can’t just dismiss it as nice to have. They need to either provide the resource for proper testing or document why they believe it’s reasonable to proceed without it. Your professional judgment becomes evidence in their accountability, which means it has to be taken seriously.

This doesn’t mean every recommendation gets implemented. It means every recommendation has to be properly considered and decisions have to be documented with clear reasoning. That’s a completely different dynamic from the current situation, where recommendations often disappear into governance structures and no one’s answerable for whether they were acted on.

Would it make projects more risk-averse?

Yes, applying SMCR rules to government digital projects would probably make the people involved more risk-averse. But we need to be honest about what kind of risk-aversion we’re talking about.

Right now, government digital takes massive risks with vulnerable users all the time. We just don’t call them risks because the consequences land on the people using the services rather than on the institution. Publishing content without testing it properly with people who have cognitive disabilities is incredibly risky behaviour, but it’s normalised because no one specific faces consequences when it goes wrong.

SMCR would shift what gets treated as risky. Under personal accountability, launching content you haven’t adequately tested becomes the risky choice. Right now, taking time to test properly is seen as the risky choice because it might delay delivery or require difficult conversations with stakeholders.

So yes, senior managers would become more cautious about publishing content they couldn’t prove they’d made reasonable efforts to make accessible. Is that bad? Not if you’re the person with cognitive disabilities who the content is about to harm.

The real question is whether this appropriate caution around user harm would spill over into inappropriate caution around innovation and experimentation. And honestly, it might. Some senior managers might become defensive decision-makers who won’t approve anything without exhaustive documentation and multiple sign-offs. That’s a genuine risk.

But I’ve watched government digital projects where the opposite happens. Where teams move incredibly fast and break things. The things that get broken are vulnerable people’s ability to access services they desperately need. I worked on projects where everyone knew the content wasn’t working for people with reduced capacity and we published it anyway because timelines or policy constraints or stakeholder concerns took precedence. That’s not bold innovation, that’s institutional recklessness that we’ve decided to call agility.

Financial services operate under SMCR and they still innovate. They still move fast when they need to. They just have to document their thinking and prove they considered the risks to vulnerable customers. The innovation that gets slowed down is the kind that treats user harm as acceptable collateral damage.

The uncomfortable bit is that some of what government digital currently calls innovation would look different under SMCR. You couldn’t launch a minimum viable product that you know will fail people with cognitive disabilities and justify it by saying you’ll iterate based on feedback, because the feedback you’re waiting for is evidence of harm. Under SMCR, someone would have to explain why they thought causing that harm was reasonable when testing could have prevented it.

There’s also a question about crisis response, which is where this gets genuinely complicated. When you need to publish urgent public health guidance during a pandemic, personal accountability for every decision could absolutely slow things down. But even in crisis situations, you’re making choices about whose safety matters. Publishing fast but inaccessible content means moving quickly for some users while creating barriers for others. SMCR would force you to document that trade-off and explain why you thought it was reasonable, rather than treating accessibility as something you’ll get to later.

The risk-aversion argument often assumes the status quo is appropriately balanced, when actually we’re just comfortable with who currently bears the risk. SMCR would redistribute that risk from vulnerable users to senior managers, which would feel like excessive caution if you’re used to the risk being externalised.

What I think would actually happen is more nuanced. Some senior managers would become defensive and slow. Others would become better at their jobs because they’d have to genuinely understand the risks their decisions create. Teams would get better at building testing and documentation into their process from the start, rather than treating it as a nice-to-have. Decision-making might be slower initially but clearer, because someone with actual authority would be required to make the call, rather than decisions happening through endless consultation trying to achieve consensus when no one’s actually accountable.

The question isn’t really whether SMCR would make government digital more risk-averse. It’s whether we want the people with power to make decisions to face personal consequences when their decisions harm vulnerable users. If the answer is yes, then some increased caution from those people is a feature, not a bug.

What we’d need to guard against is that caution paralysing teams or creating defensive documentation culture where everything gets recorded to provide cover but nothing improves. That’s a genuine risk. But it’s not inevitable, and it’s probably preferable to the current situation, where we’re taking enormous risks with vulnerable users and treating it as normal because the consequences are invisible to the institution.

The real reason this would work

SMCR in government digital would work because it aligns accountability with power. Right now, the people with the power to make resourcing decisions and override policy constraints aren’t personally answerable for whether content harms vulnerable users. The people who are closest to the harm, content practitioners and researchers, don’t have the authority to fix it.

Personal accountability means the people with power have to care about outcomes, not just process. They have to prove they took reasonable steps, which means actually taking them. And crucially, they can’t blame users for not coping with bad content. That’s literally the accountability they’re signing up for: ensuring content works for people as they actually are, not as we wish they were.

This is institutional accountability in practice. It’s the principle I built my work around: organisations being responsible for the harm their systems create rather than placing the burden on individuals to cope. SMCR forces accountability by making it personal, immediate and consequential.

Would it be expensive initially? Yes, because you’d have to resource content work properly. Would it create uncomfortable conversations? Absolutely, because it would expose how often government digital prioritises organisational convenience over user reality. Would senior people actually have to understand what cognitive load means in practice? Yes, and that’s precisely the point.

The current system is comfortable for everyone except vulnerable users, whose needs go unmet. SMCR would shift that discomfort onto the people with the power to change things, which is exactly where it needs to be.

I’ve worked on enough government digital projects to know this won’t happen voluntarily. Organisations don’t impose accountability frameworks on themselves. But understanding what it would look like helps clarify what real accountability means and what we should be pushing for even without a formal regime: someone specific answerable for whether content works for vulnerable users, decisions documented with clear reasoning, and resources allocated based on need, not convenience.

That’s not a regulatory fantasy. That’s basic professional practice for work that affects people’s lives. We manage it in financial services. There’s no reason we can’t expect it in government digital.

Workplace Insights coach Adrie van der Luijt

Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes:

  • developing the UK’s national drink and needle spiking advice service used by 81% of police forces in England and Wales – praised by victim support organisations
  • creating user journeys for 5.6 million people claiming Universal Credit and pioneering government digital standards for transactional content on GOV.UK
  • restructuring thousands of pages of advice for Cancer Research UK’s website, which serves four million visitors a month.