It’s not surprising that brands everywhere see AI-generated content as a shortcut to increased sales and engagement.
Quantity at speed is AI’s superpower. With the right information and instruction, AI is capable of whipping up articles, web pages, scripts and presentations in a flash. ChatGPT alone generates some 100 billion words per day, equivalent to about a million novels.
But the opportunity generative AI (GenAI) poses for organisations is also the root of many risks. Since large language models (LLMs) work by analysing patterns in vast datasets and predicting the word most likely to come next in a sequence, they can produce copy that is:
- Inaccurate – potentially damaging your reputation and stakeholder relationships
- Derivative – potentially attracting copyright infringement claims
- Biased – potentially alienating your readers and resulting in litigation or fines
- Generic – potentially missing the mark with your target audience and ranking poorly on search results
Without quality control, organisations risk churning out a barrage of “blah, blah, blah” that erodes value and wipes out any time-saving gains. That’s why human oversight is vital in any content creation process involving AI.
Here are five ways to get the benefits of this technology without sacrificing quality, authenticity and integrity.
1. Provide high-quality inputs
The phrase ‘you get out what you put in’ could have been written for GenAI. To take advantage of its powers, you need to invest in training your content creators on how to craft precise, contextually relevant prompts and troubleshoot when the AI model simply doesn’t ‘get it’. Rather than settle for a low-quality draft, they can learn how to use the tool to incrementally improve quality through a process of refinement or by breaking a complex task into a series of prompts that build upon each other. One study found that a well-constructed prompt enhanced the quality of an LLM response by about 58% on average.
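The ‘series of prompts that build upon each other’ approach can be sketched in code. This is an illustrative sketch only: `call_llm` stands in for whichever model API you use, and the step wording is hypothetical, not a recommended prompt set.

```python
def chain_prompts(call_llm, steps, brief):
    """Run a complex writing task as a series of prompts, feeding each
    draft back in as context for the next refinement step."""
    draft = brief
    for step in steps:
        draft = call_llm(f"{step}\n\nWorking draft:\n{draft}")
    return draft

# Stand-in for a real model call (e.g. an API client), for illustration only.
def fake_llm(prompt):
    return prompt.split("Working draft:\n")[-1] + " [refined]"

steps = [
    "Outline the key messages for this brief.",
    "Expand the outline into a first draft in our brand voice.",
    "Tighten the draft: cut clichés and passive constructions.",
]
result = chain_prompts(fake_llm, steps, "Launch announcement for our new service.")
```

In practice, each step could also inject your tone-of-voice guidelines or brand examples into the prompt, so every refinement pass is anchored to what ‘good’ looks like.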
Providing the AI model with language references, such as tone of voice guidelines, a brand identity manual or examples of what ‘good’ looks like, is another key way to set it up for success. If you haven’t yet defined your brand personality, make this a priority as it will ensure all content, whether human or AI-generated, sounds consistently authentic. By feeding these inputs to the model from the start, you can avoid generic language that fails to resonate with your target audience.
2. Fact-check all claims
Skewed and inaccurate content is an inherent risk of AI-generated text. Rather than retrieving information as a human would, LLMs use patterns in their datasets to generate text based on the probability of one word following another. So, they can spout falsehoods that have been circulating online, quote outdated information and fabricate wildly inaccurate statements that sound plausible.
These ‘hallucinations’ can be as harmful to authors as readers. Two US lawyers recently admitted that they had inadvertently included cases concocted by AI in a lawsuit against a supermarket giant, prompting their firm to remind its attorneys that failure to verify AI-generated claims may result in “court sanctions, professional discipline, discipline by the firm (up to and including termination), and reputational harm”.
AI-generated content should never be published without a human fact-check. Ask your LLM to cite its sources, and check those sources are reputable and up-to-date, ideally by cross-referencing them with other credible sources.
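Part of that cross-referencing step can be semi-automated before the human check. The sketch below is purely illustrative: `REPUTABLE_DOMAINS` is a hypothetical allow-list you would define yourself, and anything it flags still needs a human fact-check — it only triages which cited sources deserve the closest scrutiny.

```python
from urllib.parse import urlparse

# Hypothetical allow-list of sources your editors already trust.
REPUTABLE_DOMAINS = {"gov.uk", "who.int", "nature.com"}

def flag_sources(cited_urls):
    """Split an LLM's cited URLs into those on a vetted allow-list and
    those a human editor must verify by hand."""
    vetted, needs_review = [], []
    for url in cited_urls:
        host = urlparse(url).netloc.lower()
        if any(host == d or host.endswith("." + d) for d in REPUTABLE_DOMAINS):
            vetted.append(url)
        else:
            needs_review.append(url)
    return vetted, needs_review

vetted, review = flag_sources([
    "https://www.nature.com/articles/example",
    "https://example-blog.com/ai-facts",
])
```

Even a ‘vetted’ domain only tells you the source exists and is credible, not that the model has quoted it accurately or that the page says what the AI claims, so the human check remains essential.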
3. Screen for bias
GenAI is trained on data that was, at least originally, generated, selected, curated, processed and distributed by humans – a species prone to bias. When an LLM mirrors human prejudice, it creates data that it, and other models, can reference in the future, potentially perpetuating and amplifying biases globally.
In one recent study published in Nature, researchers found that LLMs embodied “covert racism in the form of dialect prejudice” against speakers of African American English. As a result, in test scenarios, LLMs were more likely to suggest these individuals be given less prestigious jobs, be convicted of crimes and even sentenced to death.
It’s important that your content creators know how to spot bias in AI content, including subtle ‘microaggressions’ which, when perpetuated and magnified by the technology, could add up to more blatant discrimination in the future. Simply being aware of the risk of unconscious bias is a good starting point, as is training on writing neutral prompts.
4. Add the human touch
What makes content persuasive and memorable is difficult to define. A thunderbolt insight, a personal experience, a nuanced understanding, a playful choice of words – these features can transform prose from mind-numbing to mind-blowing.
Sometimes, a human writer will break the rules of grammar and convention to change the narrative rhythm in case a reader is dozing off. Like presenting a clause as a sentence. Or turning a cliché into a pun. Or posing a question. Such devices can be jarring, but their unpredictability can also make a reader sit up and take notice of key messages.
By definition, this human quality is not in GenAI’s wheelhouse. As a prediction engine, its output is generic, following the linguistic path of least resistance. While it can obey tone-of-voice instructions, it has no understanding of why these are in place or of the subtle, and evolving, context that feeds into your brand identity. While it can generate ideas by remixing the Lego blocks of language, it lacks the emotional intelligence and inspiration to spot a good idea and see it through, having no initiative or desire to connect.
That’s why humans will always play a vital role in content quality assurance. As well as screening for errors and bias, human editors need to look out for common AI giveaways such as clichés, robotic rhythm and bland, unnatural or irrelevant statements. Has the AI model generated 1,000 words that state the obvious? Has it missed the point altogether? Is it writing in US English when your target audience uses UK English? Consider whether quotes, case studies or a first-person voice would strengthen your narrative, as AI won’t provide these unprompted. And pay particular attention to introductions and conclusions, where formulaic text could mean your content is instantly forgotten – or never read at all.
Google does not penalise AI-generated content per se, but its rankings do prioritise “helpful, reliable, people-first content”. Who knows if content is helpful, reliable and people-first? Why, people, of course!
At Stratton Craig, we can help you create content that counts – with or without AI. Talk to us today about how we are keeping pace with this revolutionary technology while maintaining our high standards of content quality.