19 March 2023

Artificial Intelligence Is Dumb


ChatGPT, the hot new artificial intelligence text generator, is fun. Part of a larger profusion of Mad Libs-y A.I. technologies—of a piece with “recordings” of Joe Biden and Donald Trump fighting about video games or “paintings” of scenes from Star Trek: The Next Generation by Van Gogh—it’s a diverting way to spend a few minutes. I am particularly fond of making it write poems about professional athletes in the style of the great nineteenth-century Romantics. (“Thou art like a sprite on the court / J.R. Smith, with your moves so deft and bold,” starts one such poem, meant to be in the style of “Ode to a Nightingale.” Does it sound like Keats? Not particularly. But J.R. Smith was a bit like a sprite on the court. ChatGPT gets partial credit.)

There are lots of ways to waste time on the internet and, thus far, ChatGPT and its A.I. brethren fill that niche nicely, if fleetingly—ChatGPT has replaced DALL-E and the other weird painting program that made profile pictures where you looked hot (but also may have stolen your face) as the reigning robot overlord of the moment. Even if the work it produces is often shoddy and stylistically flat, it is very fast. Consider that this piece took me about a day to research and write, but when I asked ChatGPT how it will change humanity, it answered in about 15 seconds. (It told me, basically, that it would make stuff like customer service and data entry more efficient. “Overall, my impact on humanity will depend on how people choose to use me and the technology that supports my existence,” it concluded. “I am here to assist and provide information, but it is ultimately up to humans to decide how they will use that information to shape the world.” Thanks, ChatGPT!)

But as they say, we can’t have nice things. It’s not enough that ChatGPT is fun. It’s not even enough that it is, in many ways, remarkable. For many, ChatGPT must be much more than that. For some, most recently war criminal Henry Kissinger, former Google CEO Eric Schmidt, and academic Daniel Huttenlocher, writing in The Wall Street Journal, A.I. must be seen as a transformational technology, on par with Gutenberg’s printing press.

“A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing,” write Kissinger and friends. “The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration.” A.I.’s ability to distill vast amounts of human knowledge in an instant, they argue, will have remarkable effects on humanity, consciousness, and life on earth in ways that are mostly good but also maybe bad. They are hardly alone. Over the last few months, commentators have predicted that artificial intelligence will disrupt a host of industries and sectors, from journalism and education to medicine and the arts (and also maybe destroy humanity).

There is something attractive—and simultaneously frightening—about the idea that we are living through yet another profound technological disruption, only a few decades after the rise of the internet. Artificial intelligence is being presented in many instances as a kind of successor to the internet, the next evolution for which we’ve all been waiting. Two major shifts in roughly three decades would be incredible—first the advent of the World Wide Web and now the proliferation of chatbots and A.I. software. But that’s the idea on which many have pinned their hopes—and their hype.

One problem: There is, for all intents and purposes, no real evidence that we’re on the precipice of so profound a technological vibe shift. So much of the writing on artificial intelligence relies on its making a series of massive leaps at some point in the future. Give the devil its due: Artificial intelligence has gotten much better in recent years. The ChatGPT software I sometimes converse with is significantly more advanced than the SmarterChild bot on AOL Instant Messenger that I used to pester as a child. But the idea that this software, even if it takes another great leap forward, marks a transformational moment in civilization, or even in technology, is still remarkably unfounded, based entirely on abstractions and hypotheticals.

As The Wall Street Journal pontificators write, artificial intelligence “can store and distill a huge amount of existing information—in ChatGPT’s case much of the textual material on the internet and a large number of books—billions of items. Holding that volume of information and distilling it is beyond human capacity.” This is true! I would certainly like to be able to hold every bit of information on the internet in my brain, but that would almost certainly drive me insane within seconds. (Also, my brain is already full of things like “Liverpool goal statistics, 2011–present” and “Lana Del Rey song lyrics, also 2011–present.”)

But Google search also does this. It can’t synthesize in the exact same way, but it does something similar—something familiar and useful at that. Like ChatGPT, its algorithm pulls from a nearly infinite number of potential sources and directs you toward what it thinks is most appropriate, given your search terms. Google search also got way better, very quickly, much as ChatGPT has—pushing competitors like Bing (which now integrates ChatGPT) into oblivion. It has since stopped getting much better and hasn’t markedly changed in the last several years. What ChatGPT does is not particularly different from what we have been doing with search engines for decades.

Much has been made of ChatGPT’s ability to pass exams and write term papers. Again, the program is decent at churning out what could charitably be called a facsimile of coherent copy. But, at the risk of swerving into another recent subject of the discourse—the decline of the humanities—there’s nothing to suggest that it can mimic, let alone replace, critical thought.

If you want a five-paragraph essay about bats—what they are, where they live, what they eat—ChatGPT could do that for you. But it doesn’t have ideas. If you want it to explain, for example, the convoluted discourse around the movie Everything Everywhere All at Once, you’re in trouble. It will at best cull from the extant body of criticism to construct some similar-looking amalgam. (Indeed, it does this when asked about the backlash to EEAAO, pointing to the film’s “politics,” “representation,” and “complexity,” which is an answer that is both pretty good and unsatisfying.) That’s the thing that is too often left unsaid amid this recent wave of A.I. hype: Even when it can generate something that mimics critical thought, it’s really just creating a pastiche, cribbed from human sources. Like search engines, artificial intelligence programs are still reliant on us: the humans who program them, and the actual creators of the content they are constantly trawling.

There are important ethical and philosophical questions that need to be answered as we prepare to bring this technology to greater prominence. ChatGPT does have the potential to spread misinformation. Naturally, so does literally every other piece of media we regularly interface with. But the difference is that there is an air of objectivity with ChatGPT—this comes, in part, from its neutral, antiseptic writing voice. Still, like much of the hand-wringing around misinformation, even this concern is both overhyped and misanthropic. It assumes a level of passivity among consumers: that they’ll believe literally anything they encounter, and that many, if not all, of the malevolent acts we’ve seen over recent years have been the result of people being duped. This is a comforting and paternalistic thought, but one with no evidence to support it.

Kissinger and his co-authors end their piece with a series of questions: “Can we learn, quickly enough, to challenge rather than obey? Or will we in the end be obliged to submit? Are what we consider mistakes part of the deliberate design? What if an element of malice emerges in the AI?” Sure, these are fun questions. They’re also extremely goofy, straight out of science fiction. There have been examples of what we might call “malice,” for lack of a better term: A chatbot creeped out New York Times technology reporter Kevin Roose by basically behaving like HAL 9000. But their encounter was silly rather than frightening. In this case, the chatbot—after being nudged in that direction by Roose—played into some common fears about technology. But even if chatbots suddenly started slipping into some menacing pose, a question remains: What can they actually do to us? (That question has an easy answer: Nothing. Again, this is not 2001: A Space Odyssey.)

It’s at this point in their A.I. vision quest that Kissinger et al. devolve into complete nonsense. They argue that artificial intelligence will transform international relations because countries will want to pursue some form of digital imperialism, the better to get their hands on superior A.I. models. For sure, man! Then they go even further, arguing that these models will “alter the fabric of reality itself.” This is, to put it mildly, insane.

There is a lot of uncertainty about artificial intelligence right now. That’s exciting and entertaining! But we’re also getting a heavy dose of wild speculation about how this technology will revolutionize art, media, technology, politics, our brains, and life on earth. So far, the available evidence suggests that it may provide some search engine competition. It might occasionally be useful for writing a memo or filling out whatever the current equivalent of a TPS Report is. But we already have technology that makes that stuff easier. These A.I. applications are fun new tools, but there’s no evidence they’ll ever amount to much more than that. They certainly aren’t going to bring about the singularity. If I’m wrong, I’ll apologize to—and welcome—our new robot overlords.
