
Donald Morrison: Who’s afraid of artificial intelligence?

As advancements in technology continue to advance at a rapid pace, the topic of artificial intelligence (AI) has become increasingly prevalent in our society.

How’s that for a killer opening line? Actually, it’s a mess — boring, hackneyed (“prevalent in our society”?) and repetitive (“advancements” that “continue to advance”? What else would they do?).

But I have a fondness for this sentence. It kicks off the first essay I ever commissioned from ChatGPT, one of the world's newest, most powerful chatbots. Such AI tools can answer questions, solve problems and compose essays, poems and rap songs, as well as mashups thereof.

So, I asked ChatGPT to write a 500-word column for the Eagle on, of course, the future of artificial intelligence. I thought it might make my job easier. I was wrong.

First, a word about artificial intelligence. The idea of creating nonhuman entities that can think is at least as old as Mary Shelley’s 1818 novel “Frankenstein.” Alan Turing, the famous World War II codebreaker, brought artificial intelligence into the modern era. He also proposed the Turing Test, in which a machine would be deemed “intelligent” if its responses in a conversation were indistinguishable from a human’s.

Artificial intelligence embraces a wide range of talents. Let me quote my commissioned ChatGPT essay: “AI refers to the development of computer systems that are able to perform tasks that would typically require human intelligence, such as learning, problem-solving and decision-making.” If you’ve ever called a customer service hotline, you’ve probably interacted with artificial intelligence. And wanted to strangle it.

Indeed, AI is everywhere. It’s used to trade stocks, detect (and commit) online fraud, translate foreign documents, process job applications, evaluate medical tests, inform bail and parole decisions, even write news articles (though not at the Eagle). Routine stories about company earnings reports, for instance, are now largely the work of robots.

All this is great for worker productivity, though not for workers. A 2016 Oxford University study estimated that 47 percent of jobs might be lost by 2033 to smart machines, algorithms and other forms of artificial intelligence. Sure, some positions will be created as well — say, for coders and tech billionaires. But because AI applications are designed to become increasingly effective, even those jobs may eventually become redundant.

Chatbots and the like digest mountains of human-generated documents and other data, figure out the patterns therein, and thus learn to improve their performance. But better usually means more like humans, who can be flawed and biased.

A 2016 investigation by ProPublica found that a commonly used AI tool for bail and sentencing recommendations resulted in harsher prison terms for minorities than for whites with the same, or worse, criminal records. And because AI knows only what has happened before, it makes mistakes — often with no human around to catch them.

A more ominous problem is deliberate misuse. During the 2016 and 2020 U.S. elections, Russia attacked the U.S. with armies of AI robots, or “bots,” posting disinformation on social media. China currently uses facial recognition and other AI techniques to keep tabs on its 1.4 billion residents, their whereabouts and their political views.

Meanwhile, AI is being adopted by the world’s militaries to make autonomous weapons and battlefield decisions. What could go wrong?

ChatGPT, available for free on the OpenAI website, doesn’t yet seem smart enough for villainy, deliberate or not. When the column I ordered turned out to be a snoozer, I asked ChatGPT to rewrite it in the style of William Shakespeare. After a few words from “Hamlet,” the chatbot simply inserted “doth” before all the verbs.

So, I doth tried again, this time as James Joyce. ChatGPT opened with a line redolent of “Ulysses,” but then went on just as before. Same for Stephen King, though this time leading with a delightfully spooky paragraph before lapsing into the usual mush.

My ghost writer did, however, seem to be getting better every time, especially when I asked it to recast my essay in the style of rap star Eminem. That one began: “Yo, what’s good? It’s your boy, here to spit some hot fire about artificial intelligence.”

At this rate, we won’t need writers in a few years. That’s the scary thing about AI. It’s designed to beat us in the Turing Test.

Yo, guys, we’re spitting gasoline on the hot fire of our own destruction. If we don’t start imposing tougher controls on the use of artificial intelligence, humans will become increasingly irrelevant.

Or as ChatGPT’s Shakespeare might put it: “To be or not to be” isn’t really the question. It’s whether we’ll be around to hear the answer.

Donald Morrison is an Eagle columnist and co-chairman of the advisory board.

The opinions expressed by columnists do not necessarily reflect the views of The Berkshire Eagle.
