The myth of AI ‘telltale signs’: why good writing still looks human
It seems everyone’s suddenly an AI detective. As generative tools become more commonplace, the online world is awash with theories about how to spot AI-generated content – whether it’s the overuse of em dashes, too many adjectives, or that suspiciously tidy Oxford comma.
The reality is, most of these so-called telltale signs of AI writing aren’t telltales at all. In fact, many of them are simply the hallmarks of good writing – clear structure, consistent tone, and grammatically sound sentences that meet a brand’s style perfectly. But now, as non-writers grow overconfident in diagnosing these traits as ‘AI’, quality craftsmanship is coming under tighter scrutiny – sometimes losing credibility altogether.
In an industry now hypersensitive to “the robots taking over”, this level of confusion has real consequences. PRs are met with growing suspicion from journalists who outright reject pitches they believe to be machine-written, while content writers battle overconfident accusations of ‘AI-generated’ content that inadvertently devalue their craft.
At such a crucial juncture, where AI suspicion is compromising the very substance and purpose behind a piece, it’s time to set the record straight. Here, we explore why ‘AI telltales’ aren’t always so black and white, and uncover why the real test of authenticity lies not in the mechanics of language, but in the meaning behind it.
The rise of the AI witch hunt
As tools like ChatGPT, Gemini, and Copilot become everyday companions, a parallel obsession has emerged: spotting the difference between AI-generated and human-written content. Entire Reddit threads, LinkedIn debates, and Chrome extensions promise to reveal the ‘truth’ behind text – as if there’s a universal fingerprint that separates human creativity from code. Of course, some signs do warrant scrutiny. For example, if you’re a British brand and your marketing material is littered with Americanisms and em dashes, it’s a pretty good sign you might have leaned too heavily on AI.
However, there is no single linguistic device that can be confidently attributed to AI-generated content. Take the word ‘elevated’, for example, which is now doing the rounds as a ‘definite giveaway’ – it doesn’t take a machine to know the word exists, yet an entire piece of content can be vetoed at a glance if it’s included. Confusion is common too: the en dash is regularly mistaken for the infamous em dash. While the latter can indicate AI-generated content, the former serves a very purposeful role in a wide range of written formats, yet is often tarred with the same brush by non-writers.
AI detection tools, while improving, also remain wildly inconsistent. The same paragraph can score ‘90% AI’ in one tool and ‘10% human’ in another. Why? Because writing isn’t data alone – it’s rhythm, intuition, and context. The nuance that lives between words is precisely what can’t be codified.
The irony, of course, is that some of the traits most often flagged as ‘AI-like’ – formality, balance, precision – are things professional writers are trained to master. And, as AI algorithms become more sophisticated, they’re intentionally nurtured to write exactly as we do. In other words, they’re picking up tips from us – not necessarily vice versa.
When good writing gets caught in the crossfire
Let’s say you write a beautifully structured article, full of clean syntax and well-paced paragraphs. An AI detector might call that ‘too polished’. You use an Oxford comma? Suspicious. You avoid clichés? Likely generated.
It’s a dangerous narrative. And when readers start equating fluency with automation, we begin to devalue skill. The difference between AI and human writing isn’t in the commas or connectives – it’s in the choices we make as professionals. Humans know when to break the rules, when to lean into feeling, and when a sentence should land like a wink instead of a statement.
We’ve seen firsthand how overdiagnosing ‘AI signs’ can cause misplaced distrust. For example, skilled PR professionals are coming under increased scrutiny for how they pull pitches and thought leadership pieces together. With journalists now shaping strict editorial guidelines that ban the use of AI-generated copy, the stakes are higher than ever. And rightly so – audiences deserve authenticity.
At The Bigger Boat, we wholeheartedly support this stance. Our use of AI has always been guided by two core principles: it should make people faster, and it should help produce work that’s equal to – if not better than – what a human could achieve alone. When opinion, experience, or intuition are involved, that balance tips. No algorithm can replicate lived perspective or emotional intelligence, and in those moments, AI belongs firmly in the background, informing research and shaping ideas, but never writing the words themselves.
Yet, if we reduce the test of authenticity to punctuation marks and paragraph flow, we risk undermining true craft. A misplaced comma shouldn’t be the difference between trust and suspicion – great writing should be judged by its insight alone.
The real telltales to look out for
This stance isn’t to say AI-written copy can’t be spotted. As we’ve covered, there are some definite patterns worth paying attention to – just not the ones you might think. These include:
Lack of specificity: AI often floats in generalities. If something feels oddly non-committal or vague – with no real examples and a lack of sensory detail – it might be made by a machine.
Repetitive phrasing: Bots love a rhythm. Look for the echoing sentence structures and looping adjectives that make copy feel robotic. We don’t mean using the same approach twice, but rinsing overused patterns that strip the writing of its natural ebb and flow. Human writing varies its tempo. It pauses, pivots, and surprises. That’s what makes it feel alive.
Emotional flatness: Even when AI tries to be emotive, there’s often a major disconnect – the idea of feeling, without the instinct behind it. If you don’t feel truly understood or represented, it could be because AI can’t draw on lived experience like humans can.
Context misses: When a piece doesn’t quite understand who it’s talking to, or misuses a reference that any human would catch, it's a good giveaway that it might have been plucked out of thin air. Sometimes, AI even likes to misquote facts too, so look out for outright lies, and always cross-reference sources.
Mismatched styles: Every brand has its own voice rules – maybe it uses an Oxford comma, capitalises key terms, or prefers British spellings. That’s fine. What’s not fine is inconsistency. If a UK brand suddenly says ‘sidewalk’ instead of ‘pavement’, or a tech company shoehorns the verb ‘fortify’ into every other sentence, it chips away at credibility. Consistent style signals care, while inconsistency signals automation.
In other words, if it reads like someone who’s never been bored, heartbroken, or inspired wrote it, and it’s inconsistent with house style, it probably wasn’t shaped by a ‘someone’ at all.
Why human writers still matter (more than ever)
Good PR and content writing isn’t simply about stringing facts together. It’s about reading the room. The best communicators instinctively know what tone will resonate with a journalist versus a CEO, how to embed digital writing practices into a website article, and how to balance clarity with authority in a press release. The ability to tailor communications and personalise content is where human instinct shines brightest.
Machines don’t have that same instinct. They don’t know when to add warmth, or when silence says more than another adjective. That’s why, at The Bigger Boat, authenticity shapes everything. We use AI where it helps (for data gathering, summarising, or ideation), but we never let it steer brand voice. It’s always a tool, and never a substitute.
What is the difference between AI and human writing?
In short: purpose. Every line a human crafts carries intent – whether that’s to persuade, to comfort, or to connect. AI mimics those patterns, but doesn’t truly understand the stakes behind them.
That’s why we believe in striking a balance. AI can influence the thoughts behind your copy, but only you can make it mean something. Brands that rely too heavily on automation risk losing that connection and eroding hard-earned trust. They risk sounding the same as everyone else – or worse, saying nothing at all. Equally, those with a false sense of confidence in identifying AI content can punish the very qualities that make human writing powerful, reducing skilled work to a hollow box-ticking exercise.
Value substance over suspicion
As the industry continues to evolve, it’s tempting to neatly categorise AI as either a threat or a saviour. In reality, it’s neither. It’s merely another tool, and one that still depends on human intelligence to make sense of the world it’s trying to describe.
At The Bigger Boat, we’re proud to defend the art of writing in its truest sense: thoughtful, emotional, and alive. While we know AI plays a significant role in supporting speed and creativity, we also know that real words, written by real people, deliver the hardest-hitting impact.
So, the next time you’re tempted to analyse whether something ‘sounds AI’, ask a different question instead: Does it sound like it truly means something? If it does, its purpose has been perfectly served.
If you want to keep up to date with the crew, don’t forget to sign up to our newsletter to benefit from digital marketing expertise, as well as exciting opportunities to improve your business’s performance.

Written by Carrie Webb
A life-long lover of the written word, Carrie is your go-to for compelling content that resonates with your audience.