13 April 2023

In an Open Letter, Tristan Harris et al. Call for a Pause on the Training of AI Systems More Powerful than GPT-4


We take a look at the recent open letter – with prominent signatories from the world of AI – and its hard-to-ignore impact.

Ex-Google Design Ethicist Tristan Harris and The Center for Humane Technology

“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology.”

While we essentially agree with some of the warnings in the recent “AI Open Letter”, for us the real headline is the presence of one of its less well-known signatories. We have been looking for the right context to make sure the OODA Loop community is familiar with Tristan Harris – the Executive Director of the Center for Humane Technology.

Harris, an ex-Google employee, and the work of the Center have been vital in framing the negative impacts of social media. OODA Board Member and OODA Network Member Dawn Meyerriecks (in her presentation at OODAcon 2022, Swimming with Black Swans – Innovation in an Age of Rapid Disruption) highlighted The Center for Humane Technology as an invaluable resource and a “New and Different Partnership Model” that should be on everyone’s radar. The Center provides the following description of their origin story:

“Our journey began in 2013 when Tristan Harris, then a Google Design Ethicist, created the viral presentation, “A Call to Minimize Distraction & Respect Users’ Attention.” The presentation, followed by two TED talks and a 60 Minutes interview, sparked the Time Well Spent movement and laid the groundwork for the founding of the Center for Humane Technology (CHT) as an independent 501(c)(3) nonprofit in 2018.

While many people are familiar with our work through The Social Dilemma, our focus goes beyond the negative effects of social media. We work to expose the drivers behind all extractive technologies steering our thoughts, behaviors, and actions. We believe that by understanding the root causes of harmful technology, we can work together to build a more humane future.”

In interviews, one of Harris’ go-to quotes is from Dr. E.O. Wilson and is prominently featured on the Center’s website: “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and godlike technology.”

Harris’ signature on the letter gives the document instant credibility based on his impressive work over the last few years. He is a singular voice in the debate on the future of technology – one of those people who makes behavioral and social psychology concepts genuinely accessible, while pairing them with equally accessible ethical arguments for why we should all be concerned if we do not start now in mitigating the negative unintended consequences of certain technologies, including AI.

Pause Giant AI Experiments: An Open Letter

What will it be like to experience reality through a prism produced by nonhuman intelligence?

In what some are characterizing as “Sudden Alarm”, the open letter released by the Future of Life Institute last month – Pause Giant AI Experiments: An Open Letter – calls “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.” The letter expresses the following concerns:

This is all moving too fast;
We don’t like the incentives at play;
We don’t know what we are creating or how to regulate it; and
We need to slow this all down, to give us time to think. To reflect. (1)

Tristan Harris followed up the March 22nd release of the letter with an Op-Ed in the NYT co-authored with Yuval Harari and Aza Raskin: “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills.” Mr. Harari is a historian and a founder of the social impact company Sapienship. Raskin is the co-founder of the Center for Humane Technology. The essay is littered with provocations:

Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?
What would it mean for humans to live in a world where a large percentage of stories, melodies, images, laws, policies, and tools are shaped by nonhuman intelligence, which knows how to exploit with superhuman efficiency the weaknesses, biases, and addictions of the human mind — while knowing how to form intimate relationships with human beings?
In games like chess, no human can hope to beat a computer. What happens when the same thing occurs in art, politics or religion?
What will it be like to experience reality through a prism produced by nonhuman intelligence?
Large language models are our second contact with A.I. We cannot afford to lose again. But on what basis should we believe humanity is capable of aligning these new forms of A.I. to our benefit?
But there’s a question that may linger in our minds: If we don’t go as fast as possible, won’t the West risk losing to China?

Since its release and as of this writing, the open letter has garnered close to 19,000 signatures. The initial 1,000 signatories make for an impressive “Who’s Who” of the AI academic and commercial ecosystem, including:
Yoshua Bengio, Founder and Scientific Director at Mila, Turing Award winner, and professor at the University of Montreal
Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”
Elon Musk, CEO of SpaceX, Tesla & Twitter
Steve Wozniak, Co-founder, Apple
Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem
Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship
Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute
George Dyson, Unaffiliated, Author of “Darwin Among the Machines” (1997), “Turing’s Cathedral” (2012), “Analogia: The Emergence of Technology beyond Programmable Control” (2020).
Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute
Gary Marcus, New York University, AI researcher, Professor Emeritus
Aza Raskin, Center for Humane Technology / Earth Species Project, Cofounder, National Geographic Explorer, WEF Global AI Council
Sean O’Heigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk

What Next?

Tech luminaries, renowned scientists, and Elon Musk warn of an “out-of-control race” to develop and deploy ever-more-powerful AI systems.

As reported by Will Knight and Paresh Dave at WIRED:

An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause on the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4 so that the risks it may pose can be properly studied.

It warns that language models like GPT-4 can already compete with humans at a growing range of tasks and could be used to automate jobs and spread misinformation.

The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.

Microsoft and Google did not respond to requests for comment on the letter. The signatories seemingly include people from numerous tech companies that are building advanced language models, including Microsoft and Google. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.

The letter comes as AI systems make increasingly bold and impressive leaps. GPT-4 was only announced two weeks ago, but its capabilities have stirred up considerable enthusiasm and a fair amount of concern. The language model, which is available via ChatGPT, OpenAI’s popular chatbot, scores highly on many academic tests, and can correctly solve tricky questions that are generally thought to require more advanced intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes plenty of trivial, logical mistakes. And, like its predecessors, it sometimes “hallucinates” incorrect information, betrays ingrained societal biases, and can be prompted to say hateful or potentially harmful things.

Part of the concern expressed by the signatories of the letter is that OpenAI, Microsoft, and Google have begun a profit-driven race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are happening faster than society and regulators can come to terms with.

The pace of change—and scale of investment—is significant. Microsoft has poured $10 billion into OpenAI and is using its AI in its search engine Bing as well as other applications. Although Google developed some of the AI needed to build GPT-4, and previously created powerful language models of its own, until this year it chose not to release them due to ethical concerns.

To date, the race has been rapid. OpenAI announced its first large language model, GPT-2, in February 2019. Its successor, GPT-3, was unveiled in June 2020. ChatGPT, which introduced enhancements on top of GPT-3, was released in November 2022.

Recent leaps in AI’s capabilities coincide with a sense that more guardrails may be needed around its use. The EU is currently considering legislation that would limit the use of AI depending on the risks involved. The White House has proposed an AI Bill of Rights that spells out protections that citizens should expect from algorithmic discrimination, data privacy breaches, and other AI-related problems. But these regulations began taking shape before the recent boom in generative AI even began.

When ChatGPT was released late last year, its abilities quickly sparked discussion around the implications for education and employment.

The markedly improved abilities of GPT-4 have triggered more consternation. Musk, who provided early funding for OpenAI, has recently taken to Twitter to warn about the risk of large tech companies driving advances in AI.

An engineer at one large tech company who signed the letter, and who asked not to be named because he was not authorized to speak to the media, says he has been using GPT-4 since its release. The engineer considers the technology a major shift but also a major worry. ‘I don’t know if six months is enough by any stretch, but we need that time to think about what policies we need to have in place,’ he says.

Others working in tech also expressed misgivings about the letter’s focus on long-term risks, as systems available today, including ChatGPT, already pose threats. ‘I find recent developments very exciting,’ says Ken Holstein, an assistant professor of human-computer interaction at Carnegie Mellon University, who asked that his name be removed from the letter a day after signing it, as debate emerged among scientists about the best demands to make at this moment.
