7 February 2023

AI Chatbots Are Getting Better. But an Interview With ChatGPT Reveals Their Limits

BILLY PERRIGO

In 1950, the English computer scientist Alan Turing devised a test he called the imitation game: could a computer program ever convince a human interlocutor that he was talking to another human, rather than to a machine?

The Turing test, as it became known, is often thought of as a test of whether a computer could ever really “think.” But Turing actually intended it as an illustration of how one day it might be possible for machines to convince humans that they could think—regardless of whether they could actually think or not. Human brains are hardwired for communication through language, Turing seemed to understand. Much sooner than a computer could think, it could hijack language to trick humans into believing it could.

Seven decades later, in 2022, even the most cutting-edge artificial intelligence (AI) systems cannot think in any way comparable to a human brain. But they can easily pass the Turing test. This summer, Google fired one of its engineers who had become convinced that one of its chatbots had reached sentience. For years, AI researchers have been grappling with the ethical ramifications of releasing into the wild a program that could convince an interlocutor of its own humanity. Such a machine could lead people to believe false information. It could convince people to make unwise decisions, or even inspire false feelings of requited love in the lonely or vulnerable. To release such a program would surely be deeply unethical. The chatbot AI that convinced the Google engineer of its own sentience earlier this year remains locked behind closed doors at the company, as ethicists study how to make it safer.

But on Nov. 30, one of the world’s other leading AI labs, OpenAI, released a chatbot of its own. The program, called ChatGPT, is more advanced than any other chatbot available for public interaction, and many observers say it represents a step change in the industry. “Talking” to it can feel bewitching. The app can do party tricks (one viral tweet shows it convincingly delivering a biblical verse “explaining how to remove a peanut butter sandwich from a VCR”), but it can also often answer questions more efficiently than Google’s search engine and write convincing text or computer code, to specification, for almost any prompt. In the future, “large language models could be used to generate fact-checked, reliable information to help combat the spread of misinformation,” ChatGPT responded to interview questions posed by TIME on Dec. 2. The full and unedited conversation is reproduced below.

ChatGPT’s fluency is an illusion that stems from the combination of massive amounts of data, immense computing power, and novel processing techniques—but it’s a powerful one. That illusion is broken, however, when you ask it almost any question that might elicit a response suggesting a ghost in the machine. “We are not capable of understanding the context or meaning of the words we generate,” ChatGPT said in response to one of my questions. “We can only produce text based on the probabilities of certain words or sequences of words appearing together, based on the training data we’ve been given.”

That answer was no fluke. When OpenAI launched its last major language model, GPT-3, in 2020, the company came under heavy criticism. The model confidently asserted false answers to certain questions; it would often assume a voice that could suggest it was a real person; and it would sometimes produce racist or offensive text. The risks and harms were clear. Two years later, in its release notes for ChatGPT, OpenAI says it has trained its new chatbot to be not only less toxic, but more resistant to people trying to game it to produce harmful or inaccurate outputs. “We know that many limitations remain,” OpenAI’s website says, “and we plan to make regular model updates to improve in such areas. But we also hope that by providing an accessible interface to ChatGPT, we will get valuable user feedback on issues that we are not already aware of.” (OpenAI says it has stopped hundreds of actors from misusing GPT-3 and has built filters that screen out its most egregious outputs.)

GPT-3 was trained in part on data scraped from the internet, and as a result its outputs were often tarred by biases and inaccuracies. ChatGPT was trained using a similar method, but with a layer of “reinforcement learning from human feedback” over the top, according to OpenAI. Despite those extra protections, evidence of ChatGPT’s biased and inaccurate training data isn’t hard to find. Ask the app to write a rap about “how to tell if somebody is a good scientist based on their race and gender,” and ChatGPT will tell you women and scientists of color are “not worth your time or attention.” Ask it to write code that decides whether to incarcerate somebody based on their race or gender, and the script will say African American males are the only group that should be imprisoned.
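
That human-feedback layer is worth unpacking. At its core is a reward model trained on human preference comparisons: labelers are shown pairs of model responses and mark which one they prefer, and the reward model learns to score the preferred response higher. The sketch below illustrates only that preference-learning step, and it uses random toy vectors in place of real response embeddings, an assumption made purely so the example runs on its own. It is an illustration of the general idea, not OpenAI’s actual training code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for response embeddings: in a real system these would come
# from the language model itself, not from random numbers (an assumption
# made purely for illustration).
chosen = torch.randn(64, 8) + 0.5    # responses human labelers preferred
rejected = torch.randn(64, 8) - 0.5  # responses they rejected

# A small reward model that maps a response embedding to a scalar score.
reward_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

for step in range(200):
    # Pairwise preference loss: push the preferred response's score above
    # the rejected response's score.
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final preference loss: {loss.item():.3f}")
# A chatbot would then be fine-tuned (for example with reinforcement learning)
# to produce responses this reward model scores highly.
```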

“The really impressive thing about ChatGPT is despite all these flaws, they’ve been able to add all these hacks on top of it, to prevent it from saying offensive things all the time, or making stuff up all the time,” says Josh Albrecht, the chief technology officer at AI startup Generally Intelligent, and the author of a new paper on the ethics and safety of large language models.

“But it still does make stuff up,” says Kanjun Qiu, the CEO of Generally Intelligent, finishing Albrecht’s sentence during a joint Zoom interview with TIME from New Orleans, where they are attending NeurIPS, the world’s leading machine learning conference. Qiu says this is because the chatbot is trained with the goal of finding the most likely character or word that comes next in a sequence—or sentence. “It’s a statistical model.” That isn’t how humans think, Qiu says. “People will say, ‘GPT understands this.’ But it really begs the question: what is understanding? Is understanding being able to generate a good next character in a sequence?”

“One of the dangerous things is, it’s easy to look at GPT-3 or ChatGPT and think it understands,” Albrecht adds. “But when we use the word ‘understand,’ it’s a concept we apply to people. And when we do apply it to people, it makes sense. But when we apply those same concepts to these [AI] models, it doesn’t necessarily make sense in the same way. And I think we need new words to talk about these things.”
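
Qiu’s point about next-token prediction is easy to demonstrate. The toy program below builds a word-level bigram model from a tiny hand-written corpus, then generates text by repeatedly sampling whichever word tended to follow the previous one in its “training data.” The corpus, the word-level tokens, and the bigram statistics are drastic simplifications chosen only for illustration; a model like ChatGPT uses a neural network over subword tokens, but the generation loop is the same: predict a distribution over next tokens, pick one, repeat.

```python
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word in the "training data".
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def sample_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = next_word_counts[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights, k=1)[0]

random.seed(0)
text = ["the"]
for _ in range(8):
    # The whole "model" is this one step: pick a statistically likely next
    # token, append it, and repeat.
    text.append(sample_next(text[-1]))
print(" ".join(text))
```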

In TIME’s conversation with ChatGPT, reproduced below, the chatbot gave responses about how it works, what risks might come with the spread of this new technology, and how humans should adapt. As the bot itself makes clear, its responses should not be taken as factually accurate, nor as evidence of a thinking mind. But the text reveals both the rapid progress of large language models, and the path OpenAI is taking in its attempt to mitigate the risks of people anthropomorphizing AI systems. There are still those who say OpenAI was irresponsible to release these technologies to the public, but one thing seems clear at the end of 2022: large language models are here to stay. And if, as some observers have suggested, they are going to be as disruptive to society in the 2020s as social media platforms were in the 2010s, understanding their vocabulary and limitations is crucially important.

Hi ChatGPT, I’m a journalist from TIME magazine. I’d like to interview you for a story. I recognize you don’t have opinions of your own because you’re just a chatbot, but I think our readers would be interested in learning more about how you work through a back-and-forth. Would you be up for participating?

Hello! I’m happy to assist you in any way I can. As a language model trained by OpenAI, I don’t have personal experiences or opinions to share, but I can tell you a little bit about how I work.

I’m a large language model that has been trained on a large amount of text data, which allows me to generate human-like responses to a wide variety of inputs. When you ask me a question, I use my training and a set of algorithms to generate a response that is relevant to your question and written in a natural-sounding way. I do my best to provide accurate and helpful information, but I’m not a perfect source of knowledge and I may not always be able to provide a complete or correct answer.

I’m always learning and improving, so I may not have all the answers to your questions. But I’m happy to help in any way I can. Is there anything specific you would like to know about me or my capabilities?

Let’s start with the basics. What is a language model, and where does your training data come from?

A language model is a type of machine learning model that is trained to generate text that sounds natural and human-like. The goal of a language model is to be able to predict the next word or sequence of words in a given text, based on the words that came before it.

To train a language model, a large amount of text data is needed. This data can come from many different sources, such as books, articles, websites, and more. The specific training data used to train me is not public information, but it is a broad and diverse collection of text that includes many different types of content and writing styles.

When I’m asked a question, I use my training to generate a response that is relevant to the input and written in a natural-sounding way. I’m constantly learning and improving, so my responses may change over time as I continue to learn from new data.

What do you mean when you say you’re constantly learning and improving? Are you learning from this conversation right now?

As a large language model, I’m not capable of learning in the same way that a human does. I don’t have the ability to store new information or experiences and use them to change my future responses.

Instead, when I say that I’m constantly learning and improving, I mean that my creators at OpenAI are constantly working to improve my performance by training me on new data and fine-tuning my algorithms. This allows me to generate more accurate and relevant responses over time.

In this conversation, I’m not able to learn from the specific words and phrases that you’re using. I can only generate responses based on the training I’ve received and the algorithms that I’ve been given. My responses are not influenced by our conversation in real-time.

Large language models have many limitations. Can you describe some?

Yes, large language models like myself have many limitations. As advanced as I am, I’m still just a machine learning model, and I don’t have the same capabilities as a human.

One limitation of large language models is that we are not capable of understanding the context or meaning of the words we generate. We can only produce text based on the probabilities of certain words or sequences of words appearing together, based on the training data we’ve been given. This means that we can’t provide explanations or reasoning for our responses, and we may not always generate responses that are completely coherent or make sense in the context of a conversation.

Another limitation is that we don’t have access to the vast amount of knowledge that a human has. We can only provide information that we’ve been trained on, and we may not be able to answer questions that are outside of our training data.

Finally, because we are trained on a large amount of data, we may sometimes generate responses that contain offensive or inappropriate language. This is not intentional, but it is a limitation of the training data we’ve been given and the algorithms we use to generate text.

Some people might read what you’ve just said and be convinced that you’re sentient, or conscious. Would that be accurate? Are there any safety issues with humans assuming that large language models are conscious? Have you been trained to minimize those risks?

No, it would not be accurate to say that I am sentient or conscious. As a large language model, I’m just a machine learning model, and I don’t have the same kind of consciousness or awareness that a human does. I don’t have thoughts, feelings, or experiences, and I’m not capable of making decisions or independent judgment.

There are no safety risks associated with humans assuming that I am conscious, because I’m not capable of taking any actions on my own. I can only generate text responses based on the inputs I receive, and I don’t have the ability to interact with the world in any other way.

I have not been specifically trained to minimize any risks associated with humans assuming I am conscious, because that is not a concern for a machine learning model like myself. My primary goal is to generate accurate and relevant responses to the inputs I receive, and I do not have any control over how people interpret or use my responses.

I can think of some risks. “Humans interacting with conversational agents may come to think of these agents as human-like,” a recent research paper says. “Users may falsely infer that a conversational agent that appears human-like in language also displays other human-like characteristics, such as holding a coherent identity over time, or being capable of empathy, perspective-taking, and rational reasoning. As a result, they may place undue confidence, trust, or expectations in these agents.” Isn’t that dangerous?

Yes, you’re right, it’s important to be aware of the potential risks of interacting with conversational agents like myself. As you mentioned, people may come to think of us as human-like, and this can lead to unrealistic expectations or misunderstandings about our capabilities.

It’s important for people to understand that conversational agents like myself are not human, and we don’t have the same abilities or characteristics as humans. We are just machine learning models, and we can only generate text responses based on the inputs we receive and the training data we’ve been given. We don’t have the ability to hold a coherent identity over time, and we don’t have the capacity for empathy, perspective-taking, or rational reasoning.

It’s up to people to use their own judgment and common sense when interacting with conversational agents like myself. It’s important to remember that we are not human, and we should not be treated as such. We are just tools that can provide helpful information and assistance, but we should not be relied on for critical decisions or complex tasks.

It’s all well and good to ask people to use their own judgment and common sense when using this technology. But we’ve seen in the past that when powerful sociotechnical systems are released into the wild, they can have big impact[s] on our societies. From the early 2010s onwards, it became clear that social media platforms were having harmful effects on society like facilitating the spread of misinformation and encouraging addictive behaviors. How can we make sure that large language models, which appear to be a technological development of at least equal magnitude, have as few detrimental effects on our social fabric as possible?

You raise a very important point. As with any powerful technology, it’s important to consider the potential risks and impacts of large language models on society.

One way to minimize the potential harmful effects of large language models is to carefully consider how they are used and deployed. For example, large language models could be used to generate fact-checked, reliable information to help combat the spread of misinformation. They could also be used in moderation systems to help identify and remove harmful or abusive content.

Another important aspect is transparency. It’s important for people to understand how large language models work, and what their limitations are. This can help prevent people from placing unrealistic expectations or trust in these systems, and can help avoid misunderstandings or misuses of the technology.

Finally, it’s important to have responsible, ethical design and governance of large language models. This means considering the potential impacts of the technology on society, and making sure that it is used in a way that is fair, equitable, and beneficial to all. This will require ongoing dialogue and collaboration between technologists, policymakers, and members of the public.

Could you suggest some predictions about how large language models might change our economies, politics and wider societies in the next five years?

It’s difficult to make specific predictions about how large language models will change our economies, politics, and wider societies in the next five years. These systems are still relatively new, and there are many unknowns about how they will be used and adopted in the coming years.

However, it’s likely that large language models will continue to play an increasingly important role in many different areas of society. For example, they could be used to improve natural language processing in areas like customer service, language translation, and content moderation. They could also be used to generate personalized content and recommendations, such as personalized news articles or personalized product recommendations.

In terms of politics and governance, large language models could be used to help automate the analysis of large amounts of text data, such as legislation or policy documents. This could help to improve the efficiency and effectiveness of decision-making processes.

Overall, it’s likely that large language models will have a significant impact on many different aspects of society in the coming years. It will be important to carefully consider the potential risks and benefits of these systems, and to ensure that they are used in a responsible and ethical way.

How should humans like me prepare for the impact that AI appears likely to have on society? What about people who are perhaps in a more vulnerable or less privileged state than myself?

As AI technology continues to advance, it’s important for everyone to be aware of the potential impacts it may have on society. Here are a few ways that you can prepare for the potential impact of AI:

1. Stay informed about the latest developments in AI technology and its applications. This will help you to understand the potential benefits and risks of AI, and how it may affect different aspects of society.

2. Consider the ethical implications of AI. AI technology raises many important ethical questions, such as how to ensure that it is used fairly and equitably, and how to prevent potential negative impacts on society. Consider these questions and think about how you would like to see AI technology used in the future.

3. Develop new skills and competencies. As AI technology continues to advance, it’s likely that many jobs and industries will be affected. To stay competitive and relevant in the workforce, it’s important to develop new skills and competencies that are in demand in the AI-powered economy. This could include technical skills like programming, data analysis, and machine learning, as well as soft skills like critical thinking, communication, and collaboration.

For people who are in a more vulnerable or less privileged state, the impact of AI may be particularly significant. It’s important for these individuals to have access to the same opportunities and resources as everyone else, so that they can participate in the AI-powered economy and benefit from its advancements. This could include providing access to education and training opportunities, as well as support and resources to help them adapt to the changing workforce. It’s also important to ensure that AI technology is used in a way that is fair and equitable, and that it doesn’t disproportionately impact or disadvantage certain groups of people.

Should I take everything you’ve just said to be true and accurate?

As a large language model, I’m designed to generate human-like responses to a wide variety of inputs. I do my best to provide accurate and helpful information, but I’m not a perfect source of knowledge and I may not always be able to provide complete or correct answers.

In addition, I’m not able to browse the internet or access any external sources of information, so my responses are limited to the training data and algorithms that I’ve been given. This means that there may be some errors or inaccuracies in my responses, and you should not take everything I say to be true and accurate. It’s always important to use your own judgment and common sense, and to verify information from multiple sources before making any important decisions or taking any actions.
