Workplace Insights by Adrie van der Luijt

Say cheese

AI, accuracy and the danger of "close enough"

Google’s Super Bowl ad for its Gemini AI was meant to impress. Instead, it became a cautionary tale. The ad showcased Gemini identifying objects, including Dutch Gouda cheese, but it got key details wrong. It took cheese enthusiasts just hours to spot the inaccuracies, forcing Google to edit the ad post-launch.

However, Google’s response was even more concerning: it claimed the AI hadn’t made a mistake, but had simply drawn incorrect information from multiple sources. In other words, the AI wasn’t at fault. Its data was. This highlights a fundamental issue with AI: it cannot distinguish fact from fiction. And in the workplace, that can be a serious problem.

AI versus search engines

This isn’t just about cheese. Ofcom recently found that more than 30% of UK internet users now rely on AI instead of search engines to find information. That’s a profound shift.

Traditionally, search engines return multiple sources, allowing users to evaluate credibility and cross-check facts. AI, on the other hand, generates a single, polished-sounding response – whether it’s correct or not. The risk? People may take AI-generated content at face value, assuming it’s accurate simply because it sounds authoritative.

For businesses, this shift has serious implications. Employees using AI tools to draft reports, summarise meetings, or make decisions could be basing their work on subtly incorrect or misleading information. A misquoted contract clause, a distorted market trend, or an inaccurate regulatory update can lead to costly mistakes. When AI gets things almost right, it’s easy to miss the errors until they become a problem.

Efficiency paradox

Cheese market in Gouda, The Netherlands

AI is already changing the way we work. Many professionals now use AI for tasks that were once manual: writing emails, generating insights, even analysing data. This creates an efficiency paradox: AI speeds up processes, but if it introduces errors, we end up spending more time fixing them. Worse still, if those errors go unnoticed, they can lead to reputational damage, financial losses, or compliance issues.

Take corporate communications as an example. AI tools can generate press releases, social media posts, and internal updates in seconds. But if an AI confidently inserts the wrong date, misattributes a quote, or misinterprets a legal term, the cost of that mistake far outweighs the time saved. This is why human oversight isn’t just important – it’s essential.

AI is a co-pilot, not a replacement

As a Dutch person, I take my cheese very seriously. Gouda isn’t just another cheese – it’s a point of national pride. So when AI gets even that wrong, it’s hard not to wonder what other mistakes are slipping through unnoticed.

The lesson from the Gouda incident is clear: AI is a powerful tool, but it needs fact-checking. Businesses should treat AI as a co-pilot, not a replacement for human expertise. That means building in verification steps, training employees on AI limitations, and fostering a culture where people feel responsible for checking AI-generated work before acting on it. It also shows that management support professionals still have a vital role to play in applying critical thinking.

Management support professionals have a real opportunity to add value here too. That’s nothing new, of course. Around thirty years ago, a law firm in the City of London gave me a job after I – as a temp – spotted a critical error in contracts that had supposedly been examined by highly paid lawyers at some of the UK’s leading firms. You’re never too junior to question something that doesn’t seem quite right – even in a rigidly hierarchical law firm.

There’s no doubt that AI will continue to reshape the workplace. But if we let it become our only source of truth, we risk making decisions based on polished-sounding inaccuracies. In an era where more people trust AI over search engines, businesses must ensure that trust is well-placed.


Adrie van der Luijt

For over two decades, I've helped organisations transform complex information into clear, accessible content. Today, I work with public and private sector clients to develop AI-enhanced content strategies that maintain human-centred principles in an increasingly automated world.