Note: In this post we focus on ChatGPT, one of many generative AI tools available, but the content applies to all generative AI writing assistants.
Content writers and copywriters who produce content with generative AI and fail to check and edit it before publishing are taking huge risks. Here's why generative AI needs human editors.
If you give more than 50 or 60 percent of your working hours to writing fresh, engaging, and accurate content, you probably spotted what's wrong with generative AI tools like ChatGPT the first or second time you used one:
Its writing is bloated and non-specific
The writing style is bland
It doesn’t say anything new
It can't write with nuance
Generative AI tells lies
ChatGPT and other large language models (LLMs) experience what their developers term "hallucinations" - instances where the generated text seems believable but is actually incorrect or nonsensical. You can find examples of these hallucinations in this LLM failure archive. These errors make it difficult for anyone without deep subject-matter expertise to know whether the output is accurate.
As an aside, we don't like the way creators of LLMs use the word "hallucinations" for LLM-based AI's errors or lies. On the euphemism "hallucinations", taking heed of George Orwell's view on honest language is essential. Orwell wrote that honest and clear language matters because vague writing can mislead or, worse, be used as a powerful tool of political manipulation. In this case, we think using a word like "hallucination" is cultural and economic manipulation.
LLM creators need to be more open about their products' capabilities and make it clear that they can produce inaccurate information or get the facts wrong. The word "hallucination" doesn't capture LLM flaws.
Even using ChatGPT as a research assistant is highly risky, so we find it deeply concerning that so many entrepreneurs and marketers rely on it wholesale for content. Even the well-known consulting firm McKinsey & Company misleads readers by saying generative AI makes "new" content: "It can create new content, including audio, code, images, text, simulations, and videos."
Generative AI uses old content, may be biased, and is bland
ChatGPT and other LLMs can generate responses that exhibit bias or discrimination based on gender, race, or minority status.
Trained writers - and that includes anyone from those who've taken one or two good online courses to those who've studied journalism or creative writing - find ChatGPT's writing monotonous and lacking creativity, comparable to a report written by an uninspired fifth-grade student. It often pads content with unnecessary filler or fluff to compensate for its inability to provide fresh insights.
This is in part because generative AI draws on existing web content - the text it was trained on - and then produces a piece of content that matches the prompts input by a user. The results are far from unique.
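To make this concrete, here is a minimal sketch of how content is requested from an LLM, assuming the OpenAI Python client (the model name and prompt are placeholders we chose for illustration):

```python
# Minimal sketch of prompt-driven content generation using the OpenAI
# Python client. Model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user", "content": "Write a 100-word blog intro about brand voice."},
    ],
)

# The draft is assembled from patterns in the model's training data.
# Nothing in it has been fact-checked; a human editor must verify every claim.
print(response.choices[0].message.content)
```

Whatever comes back is pattern-matched text, not researched fact - which is exactly why the editing step that follows matters.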
In fact, you probably know that a growing list of authors is suing OpenAI because ChatGPT was trained by copying their books. Authors involved in the lawsuit include Margaret Atwood, Stephen King, John Grisham, Jodi Picoult, Roxane Gay, Suzanne Collins, Doug Preston, George R.R. Martin, Jonathan Franzen, Haruki Murakami, bell hooks, Jennifer Egan, and David Grann.
Other creators such as graphic designers, comedians, and painters say their material has been used without their permission by other AI creation tools.
Increasing awareness of generative AI issues
Despite the initial wave of love for LLMs, many writers, agencies, and media outlets continue to express concern about them.
In Who’s Afraid of ChatGPT? A Message to Content Writers, Rosie Parry-Thompson, a Digital Creative Executive, explains: “AI digests information that humans have put out there (and soon, it’ll digest information that AIs have put out there). It spits it back out in a new form. In both cases, this information can be wrong, biased, or just plain boring.”
Ian Bogost in The Atlantic describes ChatGPT's prose as consistently uninteresting and formulaic in structure, style, and content. This won't matter to individuals who care little about quality (and therefore Google rankings) when adding content to a website, blog, or email template. These same individuals are happy to replace some of the best human content writers with LLMs.
Bogost is forthright with his words: “Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting. A bullshitter plays with the truth for bad reasons—to get away with something.”
What do you have remaining when you discount ChatGPT's issues and problems? A dodgy research assistant who, it seems, is practicing to run for public office and, in line with today's political standards, lies more than 50 percent of the time. But you never know which statement is a lie, and when a statement isn't an outright lie, it may still contain inaccuracies. This is a tool that cannot be trusted by anyone who values accuracy and the truth.
ChatGPT doesn’t understand that much
ChatGPT can grasp the context and relationships between words within a sentence, resulting in more coherent and contextually relevant language generation. It can even, apparently, identify emotions in fictional textual scenarios. These are impressive and considerable advancements for AI. Without doubt, LLMs will continue to advance in their reasoning power.
However, as George Moulos observes in Who's Afraid of ChatGPT? A Message to Content Writers, being limited to understanding the context and relationships between words means that observations on situations or suggestions for improvement are meaningless until verified by a trustworthy source - and this requires a human.
George points out that AI can only give responses to prompts. It doesn’t understand the deeper meaning behind words, or the experiences words describe. George gives an example: ChatGPT can use the word happy and its opposites, unhappy and sad, in sentences. It can provide users with a definition of sad. But this doesn’t mean ChatGPT understands the complexity of sadness as an emotional state, or its relationship to the world in which humans who feel sad live.
A thorough comprehension of sadness and happiness requires delving into their intricacies, considering their context, and recognizing how they intersect with our experiences. Truly grasping these emotions and their interconnectedness with our lives demands empathy, personal encounters, and a holistic understanding.
Humans need to greet new technology with scepticism first
ChatGPT's shortcomings aren't the fault of the technology or of OpenAI (the company behind it). Marketers and businesses jumped on the LLM-based AI, seizing its potential to generate money from courses, videos, and books about ChatGPT, or using the technology to deliver a business service. Plenty of social media users didn't critically review or fully understand ChatGPT, and so spread misinformation about it.
Too often humans become enthralled with new man-made technologies, especially those met with widespread, high-profile enthusiasm - even when much of it is hyperbole. Some of us interact with AI tools as if they are human, using words like "please" or "thank you", or treat them as friends.
Let’s be honest - the experience of typing onto a blank screen and seeing what appears to be an intelligent and insightful response formulate before our eyes is beguiling and, for some, addictive. Some AI fans even believe ChatGPT understands the world and every conceivable thing in it.
It may never ‘get’ your brand voice
It can take writers or communications pros a long time to become brand consultants. It's a complex and demanding field, so I'd be surprised if AI got this right immediately. In fact, it doesn't. That's why a good editor who knows and understands your brand and business goals must look at all content produced with the help of an AI writing assistant.
But first, what is brand voice? I’m going to answer this because I encounter entrepreneurs and copywriters who don’t know.
Brand voice is the nuance, cadence, and tone that your organization uses to communicate with clients and audiences. The goal is to be authentic and engender trust. Because, remember, your brand is what your clients and audience think of you.
Think about people in your life. Those who listen, are open, friendly, and supportive, and who do what they say they will do, are well-liked. Your brand is the same. Through brand voice, brand identity penetrates all aspects of your communications, as well as your branding and marketing materials.
It is made up of multiple components, from the words used and the emotion infused into communication to colours, follow-up, and follow-through (i.e. do your employees reflect your brand voice?). Together these things create your brand voice and determine how your audience perceives you.
Your brand voice also differentiates you from competitors (a key part of positioning in marketing).
Is your reader engaged? Do they love your content?
To date, all the results we have seen from LLMs make it crystal clear that ChatGPT and other AI writing tools don't produce brilliant content. By that, we mean content that draws readers in so that they are fully focused and keep scrolling (or keep watching or listening), even when their attention could be given to any number of other things.
LLMs do not have the sensory experience of the world that humans possess. Unless we're hermits, we all exist in this world with others and thrive on being social to varying degrees. Through our conscious existence we create connections with others, have new experiences, and store important and relevant memories. Professionals take in relevant information and knowledge. It's this kind of experience that allows us to become wiser and more insightful as we age.
Generative AI needs human editors
This is why an organization, business, or entrepreneur needing unique, powerful, and accurate content needs to use human writers and editors who know how to work with AI and submit the correct prompts.
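As a sketch of what "working with AI" can look like in practice - the voice guidelines, helper function, and model name below are hypothetical, chosen purely for illustration - the prompt encodes brand voice, and the output still goes to a human editor before publication:

```python
# Hypothetical sketch: an LLM call wrapped with brand-voice guidance.
# The guidelines and model name are invented for illustration; the
# mandatory human review step at the end is the point.
from openai import OpenAI

client = OpenAI()

BRAND_VOICE = (
    "Voice: warm, plain-spoken, British English. "
    "Avoid jargon, superlatives, and filler. Prefer short sentences."
)

def draft_copy(brief: str) -> str:
    """Return an unedited first draft. Never publish this directly."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": BRAND_VOICE},
            {"role": "user", "content": brief},
        ],
    )
    return response.choices[0].message.content

draft = draft_copy("Write a 150-word product-update email about our new pricing.")
# A human editor now fact-checks every claim, cuts the fluff, and
# adjusts the tone to the brand before anything is published.
print(draft)
```

Even with careful prompts like this, the draft is a starting point, not a finished piece.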