As the AI revolution enters its second year, automated content is becoming increasingly commonplace. But just how easy is it to differentiate between human and non-human text?

Let’s kick off with a confession.

A couple of Sundays ago, my 12-year-old came hurtling downstairs in a state of panic, saying she’d forgotten to do a piece of homework that was due in the next day. The exercise was to write a 300-word essay on ‘The Hound of the Baskervilles’. I promptly switched off Antiques Roadshow (because, since the dawn of time, all homework-related crises occur during Antiques Roadshow) and said the three words that automatically condemned me to the fiery pits of ‘Bad Mum’ hell.

‘Just use ChatGPT’.

(Stick with me - this anecdote’s going somewhere, I promise).

To be fair, I had also advised her to use it only to create a first draft and then edit it thoroughly. She did, to some extent, but failed to remove the AI-suggested expression ‘analytical prowess’.

I asked her if she knew what ‘analytical prowess’ meant. She didn’t. I also asked her if she’d ever used that particular expression in her work before. She hadn’t. I suggested (ever so diplomatically, as it was my stupid idea to use GPT in the first place) that if she didn’t know what it meant, the best thing to do would be to leave it out, or her teacher might quickly cotton on to the fact that these were not her own words.

While schools and universities are becoming increasingly savvy to AI content, utilising detection tools such as Turnitin to spot non-original content, the rest of us still tend to accept at face value that what we read online has been written by a real-life person. But since ChatGPT entered the mainstream at the start of 2023, can we still have the same degree of confidence? And just how easy is it to detect what’s human content and what isn’t?

How can we spot AI content?

I want to make one thing clear. This article isn’t about wagging a judgemental finger and preaching the rights and wrongs of using ChatGPT. If it was, I’d be one heck of a hypocrite as I bloody love ChatGPT. Nor does this article go into granular detail about language model algorithms and analytics (other blogs have got that covered). This is simply a guide to spotting the telltale signs of AI-generated text based on my own experience to date.

How does ChatGPT get its lingo?

ChatGPT is trained on vast amounts of text data, which it uses to generate human-like content based on common phrases, patterns and structures found in human language. While we should continue to celebrate the ingenuity of this, one problem arising from this training is that we start to see the same terminology rehashed over and over again, especially when the chatbot has been fed a bare-bones prompt.

Let’s play ChatGPT bingo

After a year of using ChatGPT and, to a lesser extent, Bing’s Copilot and Google’s Bard (plus others), I’ve started spotting some AI buzzwords that crop up time and time again. Rather than listing them all below in bullet points, I thought it’d be fun to stick them on a bingo card – because who says AI-spotting can’t be fun?

[Image: ChatGPT Bingo card of common AI buzzwords]

I hope this goes without saying, but finding these examples in a piece of content does not automatically mean it isn’t authentic. There are some great words on the card above, and I’ve certainly been guilty of sliding a few of them into my content over the past 12 months or so. But the fact remains that they do appear in AI outputs a lot.
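(If you fancy automating the bingo card, a few lines of Python will do a rough sweep of a draft for you. The word list below is just an illustrative subset – swap in whichever entries from the card you like – and it’s a crude string match, not a proper detector.)

```python
import re

# Illustrative subset of the bingo card - swap in whichever entries you like.
BUZZWORDS = [
    "analytical prowess",
    "leverage",
    "elevate",
    "tapestry",
]

def bingo_score(text: str) -> dict[str, int]:
    """Count how often each buzzword appears in the text (case-insensitive)."""
    lowered = text.lower()
    return {
        word: len(re.findall(r"\b" + re.escape(word) + r"\b", lowered))
        for word in BUZZWORDS
        if re.search(r"\b" + re.escape(word) + r"\b", lowered)
    }

sample = "We leverage our analytical prowess to elevate your brand's rich tapestry."
print(bingo_score(sample))
# {'analytical prowess': 1, 'leverage': 1, 'elevate': 1, 'tapestry': 1}
```

As per the caveat above, a hit or two proves nothing; it’s the pile-up that should raise an eyebrow.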

Let’s look at an example:

[Image: example of ChatGPT-generated digital marketing copy]

Not exactly a full house on the bingo card, but there are some flags suggesting this was not written by a human. US spellings are a good indicator (particularly when they appear on British websites!), as are Oxford commas and, of course, those words and phrases you’d only see now and again pre-GPT but which are now increasingly prevalent thanks to AI vernacular, such as ‘leverage’ and ‘leveraging’.
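Those flags are just as easy to automate, if you’re so inclined. Here’s a rough sketch with a deliberately tiny US/UK word list and a very naive Oxford comma heuristic (a comma directly before ‘and’ or ‘or’ often, but not always, signals one) – treat the results as hints, never a verdict.

```python
import re

# A handful of US/UK spelling pairs - illustrative, nowhere near exhaustive.
US_TO_UK = {
    "color": "colour",
    "optimize": "optimise",
    "analyze": "analyse",
    "organization": "organisation",
    "behavior": "behaviour",
}

def flag_us_spellings(text: str) -> list[str]:
    """List any US spellings found, with their UK equivalents."""
    lowered = text.lower()
    return [
        f"US spelling '{us}' (UK: '{uk}')"
        for us, uk in US_TO_UK.items()
        if re.search(rf"\b{us}\b", lowered)
    ]

def count_possible_oxford_commas(text: str) -> int:
    # Very naive: a comma directly before 'and'/'or' is often (not always) a serial comma.
    return len(re.findall(r",\s+(?:and|or)\b", text))

brief = "We analyze your organization's color palette, tone, and voice."
print(flag_us_spellings(brief))
print(count_possible_oxford_commas(brief))  # 1
```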

Here’s another example. This is a section of a brief that was sent to my boss the other day. The language and presentation raised his suspicions, so he passed it on to me, the resident AI detective. Take a look at a sample screenshot of that brief and see what you think:

[Image: sample screenshot of the suspected AI-generated brief]

Using our ChatGPT Bingo card as a guide, I can see lots of US spellings, even though the brief was from a British company. There are also bolded bullet points and capitalisation of every word in the headers. Oxford comma? Yep, that’s there too. Oh, and what word can I see nestled within the text?

‘Leverage’.

Quelle surprise.

How to check AI-generated content

While our little bingo card is hardly the most reliable way to spot AI content, there are detectors out there that can give you more certainty either way. A couple of tools I use a lot are Copyleaks and GPTZero, both of which can identify what they perceive to be AI-generated copy.
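And if you’d rather query a detector from a script than paste text into a web form, GPTZero publishes an API. The sketch below reflects my reading of their developer docs at the time of writing (a /v2/predict/text endpoint authenticated with an x-api-key header); endpoints and response fields do change, so double-check the current docs before building anything on top of it.

```python
import requests

API_KEY = "your-gptzero-api-key"  # issued via your GPTZero account

def check_with_gptzero(text: str) -> dict:
    """Send text to GPTZero and return its raw verdict."""
    response = requests.post(
        "https://api.gptzero.me/v2/predict/text",  # endpoint per GPTZero's docs at the time of writing
        headers={"x-api-key": API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

verdict = check_with_gptzero("In today's fast-paced digital landscape, businesses must leverage synergies.")
print(verdict)  # field names vary between API versions, so inspect the raw JSON
```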

I ran the text from the brief in the previous section through Copyleaks and – unsurprisingly – all of it was flagged as AI.

[Image: Copyleaks result flagging the brief as AI-generated]

Is this a bad thing, though? Does it actually compromise the purpose of the brief? After all, this is not a 12-year-old’s English homework. Nobody will be assessing it on authenticity and originality. But I do believe that the more easily recognisable AI content becomes, the more it will bring a business’s credibility into question… and it’s already starting to get people’s backs up:

[Image: LinkedIn post from Rachel Klaver criticising AI-generated content]

How to add the human touch to AI content

The easiest way to cheat the chatbot detectors and keep your authenticity halo well polished is to use AI as sparingly as possible. Let it loosen up your writer’s block by all means, but at the end of the day, your audience want to hear from you – not a machine.

Search engines are not fans of AI content either, and while E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is not a ranking factor, Google and co. don’t want to prioritise non-authentic, robot-produced creations over human content.

If you are using AI to write copy, here are some ways to keep it reader and search engine-friendly:

  • Thoroughly fact-check and edit all AI output. Chatbots can still hallucinate (i.e. provide incorrect information that looks plausible).
  • Keep the tone distinctively ‘yours’. If you spot words in the output you wouldn’t normally use, swap them for words you would use (e.g. if you wouldn’t normally use ‘leverage’, don’t include it!).
  • Check spellings. GPT defaults to US English, which may not be appropriate for a British audience.
  • Refine your prompts. Give ChatGPT a bland, generic input and you can expect a bland, generic output in return.
  • When creating a prompt, add detail about your target audience. The more the chatbot ‘understands’ about what you’re trying to achieve, the more relevant and personalised its output (there’s a worked example after this list).
  • Train your AI tool to write like you (tools such as Writesonic are great for this).
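To make those last few points concrete, here’s what a more detailed prompt might look like using OpenAI’s Python library. The model name, firm and brief are all placeholders – the bit that matters is the audience, tone and spelling detail baked into the prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Bland prompt: "Write a blog intro about tax deadlines."
# Detailed prompt: audience, tone, spelling and banned buzzwords all spelled out.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder - use whichever chat model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a copywriter for a small UK accountancy firm. "
                "Write in British English with a warm, plain-spoken tone. "
                "The audience is time-poor small-business owners. "
                "Avoid buzzwords such as 'leverage', 'elevate' and 'tapestry'."
            ),
        },
        {
            "role": "user",
            "content": "Write a 150-word blog intro on getting ready for the self-assessment deadline.",
        },
    ],
)
print(response.choices[0].message.content)
```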

Final thoughts

AI content is not always the devil in disguise. It has its uses and can be a trusty sidekick for creating great content. But whether you’re a 12-year-old copying out a ChatGPT summary of ‘The Hound of the Baskervilles’ or a marketer producing articles featuring words like ‘elevate’, ‘leverage’ and ‘tapestry’, the more commonplace AI-generated text becomes, the less chance you have of standing out from the crowd.

Why not try out the ChatGPT Bingo card today and see how much AI you can spot online? Anyone achieving a full house wins a pony.
