
Workplace Insights by Adrie van der Luijt
In my work with content designers, copywriters, and even authors in fields like history, I’ve encountered my fair share of absolutist viewpoints about AI ethics. You’ve probably heard them: “AI can never be ethical, because it uses stolen data.” “Using AI in copywriting is cheating.” Others insist “AI is wrong most of the time; it just makes stuff up.”
Such all-or-nothing stances are understandable as gut reactions to rapid change, but they don’t hold up under closer scrutiny. More importantly, they’re not helping us move forward. As one colleague wisely noted, polarised extremes rarely lead to constructive outcomes: “One can be alarmed by AI’s [issues] and still use it ethically and extensively. We don’t need to love or hate AI.”
In that spirit, let’s dissect these absolutist claims one by one with a realistic, pragmatic (yet critical) perspective.
This AI ethics argument goes like this: AI is essentially built on pilfered intellectual property, the “hard work of generations of creators” scraped without consent, and is therefore irredeemably unethical.
It’s true that today’s generative AI models are trained on vast swathes of internet data, much of it copyrighted or obtained without explicit permission. This raises valid ethical and legal concerns.
We should absolutely demand better practices from AI developers, such as transparency regarding training data and fair compensation for creators whose work is used. However, declaring “AI can never be ethical” and refusing to engage further is a dead end.
I’ve read threads like this one many times now, eloquent, raw and full of pain: “You can’t replace that by hoovering up my words. The experience that made it possible to write that article can’t be scraped and spat back out. Developing the work is the work.”
That last line. It’s true. And it’s why I don’t believe generative AI replaces expert practitioners. It can’t.
But I also don’t believe that our only options are to either reject AI wholesale or surrender to it completely.
The people who write these threads are skilled, experienced professionals, not Luddites. They’re canaries in the coal mine. They’re warning us that the business models behind some of these technologies are exploitative. That their own careers are at risk of being undermined by half-baked AI evangelism and corporate greed.
And they’re right to be angry.
But where we may differ is in what we do next. I don’t believe in disengaging. I believe in fighting for ethical use, clear regulation and responsible design, because if we walk away, the people who don’t care about ethics will define the future for us.
AI isn’t going anywhere and outright rejection won’t undo the reality that the technology exists and is increasingly integrated into tools and workflows.
Instead, a pragmatic approach is to push for ethical guidelines and governance in AI usage. For example, in the copywriting industry, we’re developing a Copywriter Code that calls on writers to “declare how and when AI tools are used” in projects.
By being transparent about AI’s role, we give clients and audiences the honesty they deserve and help prevent the kind of “AI plagiarism” or fraud that sceptics fear. In short, yes, AI relies on vast amounts of data (some of it dubious in origin), but we can advocate for making future AI development more ethical while using the current tools responsibly and transparently.
I’ve heard this especially from seasoned copywriters and content designers: the idea that if you let an AI assist with writing, you’re somehow cheating or producing “fake” work. Let’s unpack that.
If a writer secretly has an AI write an article and then passes it off as entirely human-made, that is dishonest, akin to outsourcing your homework and claiming credit. No reputable professional should do that and our industry standards should make clear that misrepresenting AI-generated work as purely one’s own is unethical. (Again, this is why I support explicit disclosure of AI involvement).
But using AI at some stage in the writing process is not inherently cheating. It depends on how it’s used and whether you’re transparent. Consider this: we writers already use plenty of tools and aids. Is using a spelling and grammar checker “cheating”? How about using Google to research facts or a thesaurus to find the right synonym? By today’s standards, these are normal, even expected parts of the craft.
AI can be viewed as an advanced extension of our toolset: a brainstorming partner, a first-draft generator, a way to overcome writer’s block or to generate variations that spark your creativity. What matters is that the human writer remains in control. The skilled copywriter must curate, edit, fact-check, and infuse the content with human insight and nuance that AI alone can’t provide.
In fact, even critics of AI admit that an “‘AI-free’ badge is unrealistic” because “there is not a single content tool that is free of AI. If you write in Google Docs, use spellcheck software and research online, you are using AI. It’s an oversimplification.” In other words, drawing a hard line that “any AI = cheating” doesn’t reflect how technology is already interwoven in content creation.
The key is intent and transparency. If you use AI to speed up a mundane task or generate a rough draft and then put in your expertise to refine it, you’re not cheating. You’re working efficiently (and plenty ethically). You’re still responsible for the final output.
By contrast, if you have AI spew out an article and you copy-paste it to a client without edits or disclosure, you’ve crossed the line. My stance is that we should normalise responsible AI assistance, with guidelines like our Copywriter Code’s AI disclosure rule, rather than pretend that true creatives must vow never to touch AI.
This realistic approach maintains trust and quality while still embracing useful new tools.
Anyone who’s experimented with early AI tools has seen them blurt out confident nonsense. I’ve written articles myself warning how AI can be a “blatant, unrepentant liar” that doesn’t even know when it’s lying. And I stand by that caution: even today, AI outputs should never be blindly trusted without human verification.
We’ve all seen the headlines about chatbots inventing fake citations or spouting inaccurate information. As an example, one AI-written article on men’s health was found to be riddled with errors and “persistent factual mistakes” that a knowledgeable human quickly spotted. And anecdotally, AI can produce very convincing-sounding explanations that are utterly false, like a detailed (but completely wrong) analysis of a song, which one commenter noted “would be very convincing to somebody who doesn’t know anything… even though it’s wildly inaccurate.”
This is a real problem: as AI’s writing gets more fluent, we risk losing the ability to tell when it’s wrong if we’re not careful.
That said, the claim that “AI is wrong most of the time” or “hallucinates all the time” is outdated. Like any technology, AI is improving rapidly. Its accuracy depends heavily on how you use it: garbage in, garbage out. If you give a poor prompt or ask it about esoteric facts, you might get nonsense.
However, with improved models and techniques, reliability has increased significantly. For instance, in one evaluation of a complex task (summarising documents), the latest GPT-4 model produced correct summaries 97% of the time, with a hallucination rate of only around 3%. That is a huge leap in accuracy compared to the early GPT-3 days.
Even outside specific benchmarks, users have noticed that modern AI is far more likely to say “I don’t know” or refuse to answer when unsure, whereas older versions might have cheerfully made stuff up. In short, AI can still be wrong, sometimes spectacularly, but it’s not true that it’s always wrong. Dismissing it outright due to past flaws ignores how quickly it’s evolving.
The smarter approach is to use AI’s enhanced capabilities while maintaining our critical thinking. Double-check the AI’s outputs in high-stakes situations, provide quality data and prompts, and don’t use it in ways where an occasional error is unacceptable. With those precautions, AI becomes a valuable assistant rather than an unreliable liar.
Absolutist positions like “AI can never be ethical” or “we should never use AI” might feel righteous, but they don’t help those of us who create content in the real world. The reality is that AI is already here in our tools, our search engines and our workflows. It’s not going to magically disappear.
In fact, people are increasingly using AI as a sort of search engine or research assistant. In 2024, ChatGPT was already handling a significant share of the world’s daily searches, at roughly 37.5 million queries a day, alongside traditional search engines. Millions of users now turn to systems like ChatGPT to ask questions and get information, in addition to (or instead of) Googling.
Whether we like it or not, this is how audience behaviour is shifting. Our job as writers and content professionals is to adapt and uphold standards in this new context, rather than futilely wish it away.
So what does a realistic, yet critical approach to AI look like? It starts with engagement, rather than blanket rejection.
I’ve been working with my professional community (ProCopywriters) to establish a Code of Conduct that treats AI as a tool to be used responsibly. This means encouraging high standards and boundaries: for example, copywriters who sign the code pledge to “make reasonable efforts to ensure that clients and employers understand where AI has been used in the finished work (if at all)”.
Imagine a world where it’s standard practice to footnote or mention if a piece of content had AI assistance, the way we might cite sources or disclose a ghostwriter. Such transparency turns AI from a dirty secret into just another acknowledged component of production. It builds trust: clients know what they’re getting and writers maintain credibility.
Equally important is focusing on the value human creators add. AI is powerful at pattern replication and speed, but it still lacks genuine creativity, emotional intelligence and the lived experiences that humans draw on to create meaning.
My stance has always been that writers should play to our strengths: use AI for the grunt work if you want (the first draft, the tedious bits), but then elevate that material with human insight, humour, empathy and critical thinking. The end result can be better than either AI or humans alone could achieve. And if it isn’t, then that’s on us as the professionals to demonstrate why our skill matters.
Rejecting AI outright doesn’t make it go away; it just means sidelining yourself. Using AI pragmatically, on the other hand, can augment your abilities (while freeing you from some drudgery), as long as you remain vigilant about accuracy and ethics.
Finally, we need to keep the conversation going. I never claim that my approach is the only way. I welcome discussion and debate on how to navigate AI’s impact on our industry. There are genuine concerns about jobs, originality and morality that deserve attention. However, I firmly believe that engaging with the technology is a more effective way to address those concerns than standing on the sidelines with arms crossed. AI is here to stay, whether we like it or not.
Rather than shouting “never!” and being bypassed by those who do adopt the tech, let’s shape how AI is used. We can be the voices that demand ethical use, set standards (like our evolving Copywriter Code) and ensure our clients and readers receive the quality and honesty they expect.
In summary, it’s time to move past the absolutism. No, AI isn’t a flawless angel. It has serious issues that we must continue to call out. But it’s also not a demonic force that creative professionals must exile from their lives.
The truth, as usual, is somewhere in the middle: AI is a powerful new tool, with pros and cons, and it’s up to us to use it in a way that is transparent, ethical and effective. By doing so, we protect the integrity of our work and embrace innovation. Let’s leave the “never ever” mindset behind and focus on developing practical guidelines for coexisting with AI in our industry. That’s not naïve, but necessary.
Adrie van der Luijt is CEO of Trauma-Informed Content Consulting. Kristina Halvorson, CEO of Brain Traffic and Button Events, has praised his “outstanding work” on trauma-informed content and AI.
Adrie advises organisations on ethical content frameworks that acknowledge human vulnerability whilst upholding dignity. His work includes: