Robert A. Manning and Ferial Ara Saeed
The development of artificial intelligence cannot be stopped. Yet that does not mean it cannot be realigned with human-centered priorities.

The Red Cell series is published in collaboration with the Stimson Center. Drawing on the legacy of the CIA's Red Cell, established after the September 11 attacks to avoid similar analytic failures in the future, the project works to challenge assumptions, misperceptions, and groupthink with a view to encouraging alternative approaches to America's foreign and national security policy challenges. For more information about the Stimson Center's Red Cell Project, see here.
The belief in the inevitability of artificial intelligence (AI), the promise of boundless benefits, and the fear of losing to China or to each other are driving rival American AI firms into a furious race for dominance. That race is accelerating with insufficient regard for the risks, many of which AI's own creators and corporate leaders have warned about. Concerns about safety, human impact, and the inability to prevent catastrophic outcomes have been downplayed in the pursuit of speed and supremacy. The challenge is not to stop the train, or even to slow it down, but to shift its direction so that innovation remains aligned with human objectives.
“We are building a brain for the world,” explains OpenAI CEO Sam Altman in a blog post about artificial superintelligence, defined as AI that exceeds human intelligence. Robots could one day run entire supply chains by extracting raw materials, driving trucks, and operating factories. Even more remarkably, certain robots can manufacture additional robots to construct chip fabrication plants, data centers, and other infrastructure. Machines will not just power the future; they will build it—indefinitely.