26 May 2023

AI Unleashed: Techno-Sovereignty and the Rules Deficit

Robert A. Manning

The development of artificial intelligence (AI) is racing ahead of the rules needed to control the use of this powerful new technology. While the tech revolution advances exponentially, governance tends to advance linearly and incrementally, at best. The remarkable leap in AI, as displayed by the capabilities of OpenAI’s ChatGPT-4 — from writing poetry and offering therapy to passing bar exams — highlights just the beginning of the AI tech-rules gap.

THE RED CELL PROJECT

The Red Cell was a small unit created by the CIA after 9/11 to ensure that the analytic failure of missing the attacks would never be repeated. It produced short briefs intended to spur out-of-the-box thinking about flawed assumptions and misperceptions of the world, encouraging alternative policy thinking. At another pivotal moment of increasing uncertainty, this project is intended as an open-source version, using a similar format to question outmoded mental maps and employing “strategic empathy” to discern the motives and constraints of other global actors, enhancing the possibility of more effective strategies.

Although the technology is still in its early stages, OpenAI, Google, and a host of other Big Tech companies and startups are intensifying their efforts to commercially deploy not just the best AI chatbot — fueled by $5.9 billion in venture capital since 2022 — but ever smarter AI. Their ultimate goal is to create and market artificial general intelligence (AGI), which, as one prominent technologist and venture capitalist anxiously described it, is:

“A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that [can] transform the world around it, with human-level cognition that can solve problems.”

AGI has been described as “God-like,” but it is modeled on the human brain, and experts still do not know exactly how the brain and its 86 billion neurons work. Thus, some observers doubt that AGI can be developed at all — and mock concerned individuals as “Doomers” — even as AGI attracts growing investment. Some technologists think AGI is anywhere from five years to many decades away. But ChatGPT-4 and the like have advanced much sooner than many expected. In any case, big questions about effective governance and the impact of AGI on humans have yet to be duly considered.

This predicament, long anticipated by technologists and science-fiction writers, reflects the technological imperative that whatever can be invented and commercialized should be developed, as well as a culture that believes “tech will save the world.” These factors have driven decades of Silicon Valley innovation, now powered by Big Tech, whose slogan is “move fast and break things.” That mentality produced the stunning achievements of the digital world, but it tends to downplay potential risks: the flip side is software with hackable flaws that must be patched later. This is the corollary to the “move fast” imperative: “fix later.”

AI is qualitatively different from laptops and iPhones, and from Windows or Word; its spread requires a paradigm shift. The AI experiment has spooked Henry Kissinger, who fears it will destroy reason — the means by which humans have understood the world since the Enlightenment — and alter human cognition.

One measure of the gravity of the issue: the AI race has deeply divided the AI community in Silicon Valley. Most dramatically, in March, top technologists like Elon Musk and Apple cofounder Steve Wozniak, as well as over 1,000 leading tech mavens, signed a controversial open letter that called for a moratorium on further AI research and development, warning that AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”

AI combines Big Data with ever more complex algorithms. It is, at present, less a “thing” than an enabler, like electricity — and, despite the hype, it is still in its developmental stages. Supercomputers have increased their capacity a hundredfold during the past decade. The future will be “AI plus everything”: it will be applied to industrial production, agriculture, legal research, medical diagnosis, and the biosciences, for example, in enabling CRISPR gene editing and the incredible pace of new vaccine development. AI is already judging job and college applications. Can it determine whether an applicant is what their resume or test results suggest? Can it judge character or flawed personalities as a human interviewer might?

Consider how the internet changed the world, then multiply that by ten. Recall that the internet’s creators saw it as an unalloyed good — opening up instant global flows of information. Toxic social media, hacking, the dark web, cybercrime, and the rise of Big Tech were not on anyone’s radar. The same vision of transforming the world drives AI developers.

So wondrous has been the potential for machine intelligence that big questions have been set aside until recently. Only now has the urgency of a serious discussion on AI gained public attention: Will it be manipulated as a fount of disinformation? Will AI intensify the “surveillance state”? Will it destroy jobs? Will autonomous weapons start wars or make warfighting decisions on their own? Will it replace humans?

These unanswered questions begin to reveal the magnitude of the risks ahead. Thus, it is not surprising that Sam Altman, the CEO of OpenAI, which created ChatGPT-4, says, “I’m a little bit scared of this.” Both OpenAI and Google reportedly overruled their own safety officials and ethicists in the rush for first-mover advantage.

GOVERNANCE NOT CATCHING UP

This raises the larger question: Why is the development of AI racing ahead of rules and guardrails? The United States still lacks even a national data privacy law. One explanation is the power of Big Tech. The situation today is not unlike that of the old British East India Company, which acted as a sovereign state, acquiring colonial territories in the name of the Crown with little oversight or thought of how they would be governed. The authorities in London, meanwhile, did not understand how Britain’s role would change; in the end, the new empire dominated British international relations for the next two centuries.

Big Tech has an army of lobbyists fending off governmental oversight, arguing that regulation would slow the pace of innovation and that any delay would hand an advantage to China. Yet the risks of “God-like AI” are an order of magnitude greater than those of the internet, and laissez-faire policies and U.S. political dysfunction are not up to the challenge. After endless hearings and briefings, Congress has yet to pass either new antitrust laws or governance frameworks for emerging technologies.

One irony is that a broad consensus on core AI ethical principles has existed since at least 2020. The academic, science, and technology communities, along with the tech industry, have considered AI ethics and developed codes of ethics. In the United States, the Asilomar AI Principles — endorsed by more than 2,500 AI and robotics scientists, engineers, and tech developers — were issued in 2017. Microsoft and Google issued AI ethics principles in 2018 (though they are not strictly followed). Similarly, the Organisation for Economic Co-operation and Development (OECD) and the European Union (EU) promulgated principles for trustworthy AI in 2018, and China issued its own AI principles in 2019. The G-20 adopted principles based on the OECD’s in 2019, as did the U.S. Department of Defense for semi-autonomous and autonomous weapons.

The numerous sets of AI principles vary in emphasis and detail, but they all share some basic qualities:
Human agency and benefit: Research and deployment of AI should augment human well-being and autonomy, preserve human oversight over how and whether to delegate decisions to AI systems, and be compatible with human values.
Safety and responsibility: AI systems should be technically robust, based on agreed standards, and verifiably safe, including resilience to attack, security, reliability, and reproducibility.
Transparency in failure: If an AI system fails, causes harm, or otherwise malfunctions, it should be possible to determine why and how it made its decision (also known as algorithmic accountability).
Human control of lethal force: Decisions on the lethal use of force should be human in origin, and an arms race in lethal autonomous weapons should be avoided.
Periodic review: Ethics and principles should be periodically reviewed to reflect new technological developments, particularly in general deep learning AI.

Nevertheless, there is no comprehensive set of laws and regulations to translate such principles into operational rules for AI applications. The EU has put forward a complete legal and regulatory framework for AI applications (now being legislated in the European Parliament), as has China. But the United States lags behind. The White House has released a “Blueprint for an AI Bill of Rights,” which contains some provisions similar to the EU regulations but is much broader and is merely a proposal. In lieu of a comprehensive set of laws and regulations, the Biden administration is tasking key agencies (Commerce, the Federal Trade Commission, and the National Institute of Standards and Technology) to take executive action to mitigate potential harms and devise best practices for AI.

NEEDED GUARDRAILS

Looking ahead, the speed of decisions made by increasingly smart AI and autonomous weapons will make it very difficult to achieve the goal of most AI ethics proposals: keeping a human in control. If a human does not understand why an AI has decided to take an action or how it arrived at a decision, disaster could result. A rich literature exists on how complex systems fail.

Such realities suggest that a pause may be wise. President Biden has launched new efforts to devise policy measures defining AI uses and abuses with regard to safety and privacy, and the White House announced that major AI developers such as Google and OpenAI have committed to participate in a public evaluation of AI systems. These overdue steps reflect the anxiety among some leading AI developers, which crested with the recent resignation of Geoffrey Hinton, considered the Godfather of AI, from Google over fears of the dangers ahead. But AI is dynamic and requires long-term measures. Congress needs to pass laws on data privacy and AI applications, and globally, the G-7 and/or G-20 must build consensus on how to regulate AI. Japan, host of the upcoming G-7 Summit, plans to address AI; that is perhaps a good place to start. A presidential oversight commission of technologists, venture capitalists, academics, and politicians is another initiative to consider. And an urgent exercise under UN auspices could gather scientists and tech engineers to determine how AI knows what it knows — and, if that is unknowable, how to define human control of AI. Absent global rules, the world risks a race to the bottom and potential catastrophe.
