Workplace Insights by Adrie van der Luijt

The AI workplace paradox

Why employers expect miracles from technology while devaluing the humans who use it

An analysis of the contradictions in the WEF Future of Jobs report, where employers predict AI transformation whilst expecting humans to show more creativity and resilience – all for less pay and recognition.

Last week, I found myself in a project kickoff meeting with a potential government client. Halfway through my presentation on content strategy, the programme manager interrupted me: “We’re planning to use generative AI for most of this. We’ll still need you, of course – but mainly to check the AI outputs. We should be able to cut your contract days by half.”

I’ve had variations of this conversation at least a dozen times in the past year. The same client organisations that marvel at AI’s capabilities also seem convinced that human expertise in directing, evaluating and refining these tools requires minimal skill or compensation.

This cognitive dissonance isn’t just frustrating for contractors like me. It reflects a fundamental misunderstanding at the heart of workplace transformation that the recent World Economic Forum Future of Jobs 2025 report throws into sharp relief.

The contradictions in the WEF data

The WEF report, based on input from more than 1,000 global employers, paints an optimistic picture: technological change will create 170 million new jobs by 2030 while displacing 92 million, resulting in net growth of 78 million jobs globally.

If this sounds familiar, it should. Similar predictions accompany every technological shift. What’s more revealing is the contradiction at the report’s core:

  • Employers identify AI and automation as the most transformative technological trends
  • They simultaneously predict dramatic growth in jobs requiring “human-centred skills” like creative thinking, resilience, and flexibility
  • Yet the same employers continue deploying these technologies explicitly to reduce dependence on human workers

Having worked on digital transformation projects for three decades – from early HTML to sophisticated AI systems – I’ve observed this pattern repeatedly.

What’s different this time is the speed of deployment and the breathtaking disconnect between corporate expectations and workplace realities.

The convenient myth of job creation

The history of technological displacement offers a cautionary tale about rosy job creation predictions.

When I worked on early digital government initiatives in the 2000s, the promise was that automating administrative tasks would free civil servants to focus on “higher-value work.”

In reality, this often translated to reduced headcounts, increased workloads for those who remained, and a steady transfer of expertise from public servants to private contractors (like me) who designed the very systems making us “indispensable.”

The WEF prediction of net job growth deserves particular scepticism because it comes primarily from the employers driving transformation.

These are the same organisations that consistently frame staff reductions as “efficiency improvements” rather than what they often are: transferring the burden of work onto fewer people while extracting more value.

The skill-shifting game

Perhaps the most insidious aspect of this narrative is the ever-shifting definition of “valuable skills.” According to the report, the most in-demand skills by 2030 will include:

  1. AI and big data
  2. Networks and cybersecurity
  3. Technology literacy
  4. Creative thinking
  5. Resilience, flexibility and agility

Having weathered multiple “skill revolutions,” I’m particularly wary of how employers frame resilience and flexibility as teachable skills rather than what they often become: euphemisms for tolerating increasingly precarious working conditions.

When I led training workshops at government agencies, “resilience training” typically arrived shortly before redundancy announcements. “Flexibility” was code for accepting unpredictable schedules and expanding responsibilities without corresponding compensation increases.

The dividing line nobody talks about

What the WEF report politely sidesteps is the growing division between roles with genuine agency over AI tools and those simply subjected to them.

Throughout my career in digital transformation, I’ve consistently observed how the same technology creates dramatically different experiences depending on staff positioning. Those who direct and shape technological tools typically gain productivity and job satisfaction. Those who are merely subjected to automated processes often find their work increasingly monitored, measured and devalued.

The WEF prediction that “broadening digital access is expected to be the most transformative trend” fails to acknowledge this critical distinction. Access itself doesn’t guarantee benefit if you lack the positional power to meaningfully direct technological tools.

The international perspective that complicates everything

The WEF report acknowledges “geoeconomic fragmentation” as a significant trend but underplays its impact on jobs and skills.

Having navigated international workplace cultures throughout my career – from Dutch directness to British circumspection – I’ve seen how technological transformation plays out differently across cultural contexts.

During my time working between London and Amsterdam offices, I observed how identical technology implementations produced entirely different workplace impacts based on cultural norms around hierarchy, work-life boundaries and communication styles.

The Dutch team integrated automation tools within their existing 9-to-5 work culture. The British team used the same tools to justify 60-hour workweeks and constant availability. The technology was identical; the human context transformed the outcome.

This international complexity means predictions about global job trends are fundamentally limited. Technology never enters a neutral environment – it lands in specific cultural contexts that dramatically shape its impact.

The accessibility blind spot

Perhaps most concerning is what I’ve called in previous writing the “AI accessibility gap.” As someone who has worked extensively on government digital services, I’m acutely aware that technological advances often leave vulnerable users behind.

The WEF report’s optimism about digital access ignores significant concerns about who benefits from these transformations. Voice interfaces struggle with non-standard speech patterns and accents. Image recognition produces inconsistent results for diverse users. Language models generate content that creates particular challenges for people with cognitive disabilities.

Each technological advance simultaneously creates new barriers for those most likely to depend on accessible digital services. The WEF report’s silence on this issue is deafening.

Practical navigation strategies for the real world

So how do we navigate this contradictory landscape? Based on three decades working at the intersection of technology and human systems, here are the strategies that actually work:

  1. Value positioning matters more than skills
    Rather than endlessly chasing the latest technical skills, focus on positioning yourself where you have agency over technological tools rather than being subjected to them. This often means roles that involve directing, evaluating, or deciding where and how to apply AI tools, not just implementing them.

  2. Develop technology translation abilities
    The most valuable professionals I’ve worked with aren’t necessarily the most technically skilled, but those who can translate between technical and human systems. They understand both the capabilities of AI tools and the human contexts where they’ll be deployed. This “bridging” ability consistently remains valuable through technological shifts.

  3. Cultivate genuine cross-generational collaboration
    The most effective teams I’ve led combine digital natives who intuitively grasp new technologies with experienced professionals who understand the deeper patterns of organisational behaviour. This cross-generational approach creates resilience against both technological naivety and outdated thinking.

  4. Maintain professional boundaries
    As demands for “flexibility” increase, clearly defining what quality means to you and being willing to walk away from work that doesn’t allow for maintaining those standards becomes essential. No client is worth sacrificing your wellbeing for – something it took me too many years to learn.

  5. Focus on strategic value, not tactical output
    When clients question your value, shift conversations away from tools and toward outcomes. What matters isn’t whether content was produced by AI, but whether it achieves business objectives and meets user needs. Document your strategic contributions rather than just tactical outputs.

The future isn’t machines vs humans

The uncomfortable truth is that some organisations will always find ways to devalue work, regardless of technological shifts. AI is just the latest excuse in a long history of excuses.

What gives me hope is seeing professionals who refuse to internalise these messages – who understand that their value lies in strategic thinking, experience, and professional judgment, not just in producing outputs quickly.

As one former colleague told me recently, “AI hasn’t changed the fundamental truth of our profession. Good content takes time, thought and care. The tools change, but the craft remains.”


Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.

Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes projects for the Cabinet Office, Cancer Research UK, the Metropolitan Police Service and Universal Credit.