Note: This post was published first on WritingPursuits.com.
The other night, I was scrolling through Threads and was startled by a post that read—and I will paraphrase—"I won't read any post on here that has art from MidJourney. That's a red flag. I won't have anything to do with folks who cheat by using AI."
Several folks chimed in with very judgmental replies condemning authors who use ChatGPT, expressing their own superiority for never touching it. Absolutely convinced that they would never!
Some authors and artists have taken a hard line against using AI for anything, but overall, I think the hand-wringing and judginess are premature.
Danger is inherent to progress.
Before I get started, let me refer you to last week's interview with J. Thorn on his book, Cowriting with ChatGPT as more background for today's topic.
Artificial Intelligence (AI) is my nominee for Buzzword of the Year, and maybe it will be the Buzzword of the Decade.
Many people are afraid of artificial intelligence in all its forms. Admittedly, AI has great potential to be dangerous. After all, the emergence of the internet brought unknown dangers with it. Social media platforms harmed us in ways we couldn't foresee.
Every time a new technology emerges, people get hurt. How many people have died in automobiles? Airplanes? Submersibles?
Is it too soon? Sorry. Hopefully, we can avoid another submersible fiasco.
This is why I'm not stressing over the natural progression of AI. Danger is inherent to progress. The best we can do as we learn about its dangers is to develop safety rails.
Artificial Intelligence Was Inevitable
Consider human history, especially history since the Industrial Revolution.
Mechanization was followed by energization, followed by mass communications and digitization of information. In my opinion, each step was inevitable.
Digitization inevitably led to big data, which inevitably led to artificial intelligence. If you think of big data as a geode, the kind of rock that's ugly on the outside with pretty crystals on the inside, then artificial intelligence is the machine to crack the data geode and reveal its value.
Authors who are tempted to follow the Luddite path of resistance (e.g., smashing textile machines) need to learn from history; the tech is here to stay and will only grow more sophisticated.
Pandora already opened the AI box; it's too late to stuff the AI troubles and woes back inside.
What about plagiarism?
This is a valid question and top of mind for every author.
First, let's talk about how ChatGPT works in simple terms. AI technology like ChatGPT is based on Large Language Models (LLMs), which use algorithms to parse trillions of written examples to "learn" how to respond to conversational prompts. It uses math to PREDICT what is required.
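To make "uses math to PREDICT" concrete, here is a toy sketch of my own (an illustration of the idea, not how ChatGPT is actually built): a little word counter that "trains" on a tiny corpus and predicts the most probable next word. Real LLMs do something vastly more sophisticated over trillions of tokens, but the spirit is the same: prediction from statistics, not lookup of facts.

```python
# Toy illustration of next-word prediction, the core idea behind LLMs.
# We "train" on a tiny corpus by counting which word follows which.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count how often each next word follows it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this tiny corpus
```

Notice that the prediction is only as good as the text it was fed, which is exactly the point of the next paragraph.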
It bases its results on the material it has been given to process. The use of copyrighted material to feed the LLMs is being litigated, so stay tuned on that topic.
Is plagiarism possible? Yes, but I think it is statistically improbable. The sheer breadth of sampling that goes into LLMs makes plagiarism an unlikely outcome.
I mean, avoid prompts like "in the style of Famous Author." That's not helpful to you, and it's probably not strictly ethical. Your voice is your treasure; don't be a sellout.
Keep in mind that AI returns what it predicts you want to see instead of researched, peer-reviewed, verified information. In fact, it will manufacture information. (This has happened to me in my experiments with using ChatGPT.)
And worse, LLMs could return disinformation, depending on which sources have been fed to it. That's a dark, dystopian thought, isn't it?
There's a Russian proverb that Suzanne Massie made famous during the Cold War: "Trust, but verify."
You are responsible for verifying any information you receive from ChatGPT. It's not built for truth; it can only attempt to parse the information it has.
"Doveryai, no proveryai." (Trust, but verify.)
—Russian proverb made famous by Suzanne Massie
Do not rely on ChatGPT and its kin to return valid research results when asked a search engine question. Also, ChatGPT doesn't cite its sources. It's a predictive algorithm, remember?
What about academic cheating?
Students will take advantage of any opportunity to reduce the rigor of higher education. Educators must decide how to incorporate AI engines like ChatGPT into academic work. Because it isn't going away. And cheating is a thing.
Photo by JESHOOTS.COM on Unsplash
I expect academic institutions will establish guidelines for transparency and ethical ways to use AI, and I really hope that includes rigorous peer review procedures as well as appropriate citations and disclosure statements.
We want academic researchers to have the best tools available at their disposal, including the tool to crack the geode of big data. But we need transparency.
Google is already working on using AI to predict which information users will find most useful; I expect that trend to continue.
What about intellectual property?
MidJourney and other AI image generators are beyond the scope of this discussion. As are audio generators.
Even so, we must all be wary of the potential abuse of our photos, videos, and voices to create false narratives, and this is an extra concern for celebrities.
So, please, don't think I'm all in on AI. I urge you to take a measured, cautious approach without hitting the panic button or passing on unfounded conspiracy theories.
Most folks don't know what to think.
Photo by Hans-Jürgen Weinhardt on Unsplash
As I said, some authors and artists have taken a hard line against using AI for anything; overall, I think the hand-wringing by authors is premature. They like to blame ChatGPT for all the chaos at Amazon, but …
Authors were already experiencing decreasing sales, reduced visibility, and a glut of books on Amazon.com before ChatGPT-generated hack books exacerbated the situation.
ChatGPT is not the cause of the excessive book listings on Amazon; people are.
Transparency is essential.
If you use AI while developing your content, be up front about it.
In "AI for Authors: Practical and Ethical Guidelines" by the Alliance of Independent Authors, ALLi sets forth a new AI stance. I highly recommend this entire article, by the way.
I do not want to duplicate their efforts, but here is a quote from the new guideline for the use of Tools and AI in their ethics statement. It is stated as an affirmation:
"I edit and curate the output of any tool I use to ensure the text is not discriminatory, libellous, an infringement of copyright or otherwise illegal or illicit. I recognise that it is my job to ensure I am legally compliant, not the AI tool or service I use. I declare use of AI and other tools, where appropriate."
That last line is golden: I declare use of AI and other tools, where appropriate. Just speak up. For example, in my Writing Pursuits post "Transcript - Storytelling Revolution," I included the notice: "Created in cooperation with ChatGPT." Be transparent.
Don't be judgy.
Vilifying fellow authors who choose to utilize AI applications is wrong. I wouldn't even bring this up except I have already witnessed this judgy stance on social media.
We don't know what we don't know, and anyway, policing other authors' story development methods is a miserable way to live.
You do you. If you eschew AI, that's your prerogative. If you choose to explore it, send us notes from the frontier. But transparency is essential.
Tips:
Don't be judgmental, because we don't know what we don't know.
Be transparent and ethical in your use of AI.
Choose to be a lifelong learner instead of a Luddite.
Verify the information you receive from ChatGPT and other AI engines.
Question of the week: What are your predictions about how authors will use AI in the future?
Resources:
Science Corrects Itself, Right? A Scandal at Stanford Says It Doesn't - Scientific American
How Do DALL·E 2, Stable Diffusion, and Midjourney Work? - MarkTechPost
What are LLMs, and how are they used in generative AI? | Computerworld
AI for Authors: Practical and Ethical Guidelines. Alliance of Independent Authors
Ethical Author Campaign from the Alliance of Independent Authors
Here's a high level summary (Developed with ChatGPT, what else?):
The author draws parallels with historical technological advancements and argues that AI's progression is natural, advocating for the development of safety measures rather than fearing it. The inevitability of AI's development is compared to past industrial revolutions and digitization trends. Plagiarism, academic cheating, intellectual property, and the ethical use of AI in creative fields are explored. The author emphasizes transparency when using AI tools, highlights ethical guidelines by the Alliance of Independent Authors, and encourages open-mindedness and verification when dealing with AI-generated content.