Workplace Insights by Adrie van der Luijt

AI detectors miss the point

What EAs need to know about writing with integrity

AI detectors are increasingly used by employers to filter out candidates who have used AI tools to write their CV or cover letter. Here's what management assistants involved in recruitment processes need to know about them.

A senior executive assistant friend recently applied for a role that matched her 15 years of experience. The next day, she received an automated rejection citing “AI-generated application materials” as the reason.

Her polished CV and cover letter were entirely her own work. They reflected her authentic experience, skills and voice without any AI input whatsoever.

She’s not alone. As more organisations implement AI detection tools in their hiring processes, we’re seeing a troubling trend: qualified candidates being filtered out based on flawed assumptions about what constitutes “authentic” writing.

For executive assistants, chiefs of staff and other management support professionals, this matters on two crucial fronts.

First, your own career opportunities may be needlessly limited by these AI detectors.

Second, many of you are directly involved in your organisation’s hiring processes. You need to understand the limitations of these AI detectors before they damage your talent pipeline.

The real issue isn’t AI usage

The current obsession with detecting AI in writing misses the fundamental point. The important question isn’t “Was AI involved in creating this?” but rather “Does this authentically represent the person’s thoughts, experiences and capabilities?”

Even with four decades of professional writing experience as a journalist, executive assistant and senior content designer, I use AI extensively in my writing process.

It helps me write faster, produce more engaging content and create better-structured pieces. It's another tool in my professional toolkit that enhances rather than replaces my expertise.

It's not that different from using a spell-checker or other writing software, such as Grammarly or the readability tool Hemingway Editor (recommended by the Government Digital Service!).

I use bullet points – a big red flag for AI detectors – because I’m a good boy and I’ve been taught to use bullet points whenever listing more than two items.

I’ve always used tools to refine my communication

This reminds me of my journey as a non-native English speaker. As a Dutchman who polished my English to the point where I became a senior copywriter for the Cabinet Office and scored an A+ (Distinction) for my formal spoken English test, I’ve always used tools to refine my communication.

Whether it was editorial feedback, style guides or now AI assistance, these tools helped me express my authentic thoughts more effectively. But they didn’t replace my thinking.

This distinction matters enormously for EAs. Your role has always been about leveraging the best available tools to produce excellent work while maintaining the authentic connection that makes you invaluable.

AI is simply the newest tool in your professional arsenal. It is no different in principle from the transition from typewriters to word processors or from paper calendars to digital scheduling.

How AI detection is harming careers

The executive assistant friend I mentioned earlier eventually discovered what happened. The company’s applicant tracking system automatically rejected applications scoring above a certain threshold on their AI detection tool.

When she finally reached a human recruiter through her network, they acknowledged the system’s limitations but admitted they lacked the resources to manually review all flagged applications.
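To make the problem concrete, here is a minimal sketch of how such a threshold-only filter behaves. The function name, score scale and the 0.7 cut-off are all hypothetical assumptions for illustration, not details from any real applicant tracking system:

```python
# Toy model of an ATS that auto-rejects on an AI-detection score alone.
# All names and the 0.7 threshold are hypothetical.

def screen_application(detector_score: float, threshold: float = 0.7) -> str:
    """Return the ATS decision for a single application."""
    if detector_score >= threshold:
        return "auto-reject"  # no human ever sees the application
    return "forward to recruiter"

# A polished, entirely human-written CV can still score high
# on an AI detector, so it is rejected without review:
print(screen_application(0.82))  # auto-reject
print(screen_application(0.35))  # forward to recruiter
```

The flaw isn't the threshold value; it's that the decision is terminal. Without a human-review branch for flagged applications, every false positive becomes an automatic rejection.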

She’s far from the only victim of these systems. Reports are emerging across sectors of qualified candidates being eliminated without human review based solely on AI detection scores. Particularly concerning: these tools often flag writing that:

  • is exceptionally well-structured
  • uses precise vocabulary
  • lacks the grammatical errors or inconsistencies common in purely human writing
  • follows standard professional formats

In other words, the polished, professional writing style that experienced EAs (and journalists and content designers) have developed throughout their careers can trigger these detection systems, punishing excellence rather than identifying deception.

At the same time, we also read more and more AI-generated content online – whether we realise it or not – and pick up words and phrases that trigger AI detectors.

The complicated reality of “original” writing

During the Covid lockdown, I worked on two projects to prevent fraud in Covid grant applications by developing complex automated checks.

For months, we debated whether to include plagiarism detection in our system. The sticking point? Legitimate applicants frequently used templates and previously successful applications as starting points for their submissions.

This mirrors what happens in job applications. Candidates (and grant applicants) regularly use:

  • templates from professional sites, job centres and local support organisations
  • sample cover letters adapted to their circumstances
  • professional CV writing services (which increasingly use AI themselves)
  • advice and feedback from mentors or career coaches

None of these practices indicate dishonesty. They’re simply tools people use to present themselves effectively. Yet AI detection tools often flag these common practices as “non-authentic,” creating a double standard where human assistance is acceptable but technological assistance is not.

AI as a wellness tool for overworked EAs

Beyond just productivity, AI tools offer something particularly valuable to executive assistants: mental relief in chronically overstretched roles. In a profession where burnout is endemic and workloads have exploded, AI can:

  • automate tedious, repetitive writing tasks
  • provide quick first drafts that you can then personalise and refine
  • offer structuring assistance when you’re mentally fatigued
  • free up mental bandwidth for the truly human elements of your role

This isn’t just about efficiency; it’s about sustainability in demanding careers. If AI can reduce the mental strain in your busiest periods, allowing you to focus on the complex, relationship-based aspects of your role, that’s a legitimate and valuable use of the technology.

What EAs need to know when using AI in their work

For executive assistants, who often draft communications on behalf of executives or create important business documents, understanding how to work with AI ethically and effectively is becoming an essential skill.

First, recognise that using AI tools for assistance is not inherently problematic. Just as you wouldn’t write an important email without spell-check or refuse to use templates for efficiency, leveraging AI to improve your output is simply good professional practice. The key is maintaining ownership of the content.

Second, understand that the most effective approach combines AI’s efficiency with your unique human perspective. When drafting documents with AI assistance:

  • Begin with your own clear direction and thinking
  • Use AI to help structure and refine, not to generate the core content
  • Review and edit to ensure your authentic voice and expertise shine through
  • Always verify facts, figures, and assertions rather than trusting AI output
  • Be transparent about AI usage when appropriate (though you needn’t announce spell-check)

Third, be aware that AI detectors are fundamentally flawed. They produce both false positives (flagging human writing as AI-generated) and false negatives (missing actual AI content that's been skilfully edited).
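A back-of-the-envelope calculation shows why false positives dominate in practice: when most applications are human-written, even a modest false positive rate means a large share of "flagged" candidates are innocent. The rates below are illustrative assumptions, not measured figures for any real detector:

```python
# Illustrative base-rate arithmetic. Every number here is an assumption
# chosen for illustration, not a measured detector statistic.

applicants = 1000
human_share = 0.90           # assume 90% of applications are human-written
false_positive_rate = 0.10   # detector wrongly flags 10% of human writing
true_positive_rate = 0.80    # detector catches 80% of AI-written text

human = applicants * human_share   # 900 human-written applications
ai = applicants - human            # 100 AI-written applications

wrongly_flagged = human * false_positive_rate     # 90 honest candidates
correctly_flagged = ai * true_positive_rate       # 80 actual AI users

total_flagged = wrongly_flagged + correctly_flagged   # 170 flagged in total
share_wrong = wrongly_flagged / total_flagged

print(f"{share_wrong:.0%} of flagged applications are human-written")
# Under these assumptions, over half of all flagged applications
# are perfectly honest, human-written work.
```

Under these (generous) assumed rates, more than half the applications an automated filter rejects were written by people. That is why human review of flagged applications matters so much.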

Don’t be intimidated by these AI detectors or change your writing to appear “more human”. Focus instead on ensuring the substance reflects your true professional expertise.

How EAs can protect candidates in hiring processes

Many executive assistants play significant roles in their organisation’s recruitment efforts: screening CVs, corresponding with candidates and sometimes making hiring recommendations. This means you can prevent AI detectors from eliminating qualified talent.

If your organisation uses or is considering AI detectors in recruitment:

  • Advocate for human review of all applications flagged by automated systems
  • Educate hiring managers about the limitations and error rates of detection tools
  • Suggest using structured assessments or skills demonstrations rather than relying on detection scores
  • Propose clear communication to candidates about how AI tools are used in your process
  • Remind decision-makers that CV templates, professional writing assistance and now AI tools are all normal parts of how people prepare professional applications

The executive assistant I mentioned eventually secured an even better position elsewhere. But her experience highlights a growing problem that the EA community is uniquely positioned to address: the thoughtless implementation of technology without sufficient human oversight.

The future of AI and integrity in professional communication

As AI becomes more integrated into workplace tools, the distinction between “AI-generated” and “human-written” will continue to blur.

Microsoft’s Copilot is already embedded in Office applications, Google’s Gemini enhances Gmail and Docs, and countless other tools offer AI assistance for everything from meeting summaries to presentation creation.

Given this reality, our focus needs to shift from detection to integrity. For executive assistants, this means:

  • Being thoughtful about when and how you use AI tools
  • Maintaining authentic connections in your communications
  • Taking responsibility for the final output, regardless of which tools helped create it
  • Developing a clear personal standard for ethical AI usage in your role

The EA who rejects all AI tools will soon be at a competitive disadvantage, just as those who refused to adopt email or mobile phones were limited in previous technological transitions.

The EA who thoughtlessly delegates their thinking to AI will lose the qualities that make their role valuable: discernment, personalisation and authentic representation of their executive’s voice.

Finding your balance

Every EA will need to find their own balance with these tools. Some may use AI primarily for editing and refinement, while others might leverage it for formatting, research or generating initial drafts to modify.

There’s no single “right” approach. What matters is maintaining integrity and ownership throughout the process.

What does this look like in practice? It means being able to stand behind every document you produce, whether AI-assisted or not.

It means using tools to enhance your capabilities rather than replace your judgment. And it means recognising that your value lies not in producing text that could come from anyone, but in bringing your unique professional perspective to everything you do.

Need for integrity, authenticity and ownership

Today’s fixation on AI detectors will eventually fade as these tools become more integrated into our daily work. What will remain constant is the need for integrity, authenticity and ownership in professional communication.

These have always been hallmarks of exceptional management assistants. They’ll remain so regardless of which technologies come and go.

As one EA with over 20 years of experience told me recently: “Look, I’ve gone from faxes to emails to project management apps to AI. What matters is that when my exec puts their name on something I’ve drafted, it sounds like them, not me, and certainly not some robot. I’m not worried about looking authentic. I’m worried about being authentic.”


Adrie van der Luijt

For over two decades, I've helped organisations transform complex information into clear, accessible content. Today, I work with public and private sector clients to develop AI-enhanced content strategies that maintain human-centred principles in an increasingly automated world.