AI Series Part II
By Bob Deakin
Is it real or is it AI? You enjoyed that story online about weather patterns, but was it written by a human? It’s unfortunate we have to ask, but we do. Like corruption in government or PEDs in sports, it’s always possible results are achieved through illusory means. Isn’t that the definition of artificial intelligence?
How do we tell whether written content is AI-generated? As with a crime, consider the motive. Is there an advantage to generating it artificially? Writing, editing, and research take time, and time is money. If someone is understaffed or overworked, AI-generated content is a time saver.
Here’s How It Goes
Enter a few words of info about your topic into an AI writing tool such as CopyAI, Rytr, or Simplified and in seconds you have fully formed written content. It may include bullet lists, links, and even citations. It looks authentic and may even give you ideas with which to continue.
This is where AI earns its place as a writing tool. Many writers use AI to give them ideas, and employers encourage it so writers don’t spend too long staring at a blank page. As long as the suggested copy is not used verbatim, it is merely inspiration.
If the copy is used word-for-word, it’s perfectly legal but perilous. For social media posts or chatbots, AI suffices for those focused on quantity over quality. For content created to inform or entertain, such as blogs, stories, or books? Not so fast.
Generative AI is a blanket phrase for AI models that generate content. Large language models (LLMs) are a form of generative AI used to create written content with the ability to “understand” language; ChatGPT and BERT are examples.
If AI Generated It, Someone Wrote It
Imitation is the sincerest form of flattery. That phrase is attributed to Oscar Wilde. Whether he actually said it doesn’t matter. At least it’s attributed to someone, which is more than AI generators are going to do for you. When I said ‘perilous,’ I was referring to potential plagiarism or copyright infringement.
Earlier this year I was testing an AI generator when it produced this passage:
“big-budget mind-blowing experiences to remember.” It looked familiar. I pasted it into Google and what do you know? Bob Deakin wrote it a year previously. What?
It was a line from a blog I wrote about XOps. For the purpose of the AI generator test, I entered a few words about the pros and cons of AI worldwide and I got my own copy. I could have plagiarised myself.
What are the odds of unwittingly plagiarising someone? Apparently not bad. I learned an early lesson about AI: don’t trust it for original copy. That said, each of the AI generators I tested has its own plagiarism checker and makes it clear never to copy and paste content directly.
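You can run a crude version of that check yourself before any plagiarism tool gets involved. The sketch below compares word n-grams between a generated passage and your own prior copy; it’s a toy heuristic, not a real plagiarism detector, and the sample strings are invented for illustration (only the “big-budget mind-blowing experiences” phrase comes from my story above).

```python
def ngrams(text, n=5):
    """Return the set of lowercase word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(generated, prior, n=5):
    """Fraction of the generated text's n-grams that also appear in the prior text."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(prior, n)) / len(gen)

# Hypothetical strings, for illustration only.
prior = "VR promises big-budget mind-blowing experiences to remember for years"
generated = "The industry delivers big-budget mind-blowing experiences to remember"
print(overlap(generated, prior))  # prints 0.25: one of four 5-grams is recycled
```

Five-word n-grams are long enough that a match is rarely coincidence, which is roughly the intuition behind pasting a suspicious phrase into Google.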
How Do I Detect AI-Generated Content?
As yet, there is no foolproof “AI-Dar” to sense deception. There are tell-tale signs, such as sentences of consistently uniform length, repetition, and a lack of personality. However, experienced AI users easily cover their tracks.
“Current detectors of AI aren’t reliable in practical scenarios,” said Soheil Feizi, an assistant professor of computer science at the University of Maryland, in a news release earlier this year. “There are a lot of shortcomings that limit how effective they are at detecting. For example, we can use a paraphraser, and the accuracy of even the best detector we have drops from 100 percent to the randomness of a coin flip.”
Paraphraser.IO is an example of one of the many AI-based paraphrasers available. So how do I detect AI-generated content? If you're asking, you're on the right track. Don't give up yet. I'll get to that on Tuesday.
In Part III, I explore AI generators further, as well as the tools to detect them.