20 February 2026

Promoting Stability on the Path to Superintelligence

Michael Mazarr

As the world confronts the prospect of AI superintelligence—highly advanced AI capable of performing thousands of functions as well as, or in some cases dramatically better than, humans—the question of strategic stability during the transition period is becoming a real concern. If either the United States or China thinks that the other is on the verge of AI superintelligence, it may fear being subjugated economically, technologically, and militarily, or fear that superintelligence, once unleashed, will pose myriad threats to humanity. Great powers facing such threats have sometimes lashed out in preventive wars.

The obvious risk in such a power-rearranging transition is that the race to superintelligence could prompt instability and conflict. Probably the most widely discussed diagnosis of and prescription for this challenge is the concept of “mutual assured AI malfunction,” or MAIM, proposed in the 2025 report Superintelligence Strategy by Dan Hendrycks, Eric Schmidt, and Alexander Wang. (Hendrycks and Adam Khoja later added a response to critics.) They argue that superintelligence could pose multiple dangers. “In the hands of state actors,” they write, “it can lead to disruptive military capabilities and transform economic competition. At the same time, terrorists may exploit its dual-use nature to orchestrate attacks once within the exclusive domain of great powers. It could also slip free of human oversight.” As superintelligence looms on the horizon, great power relationships could become more volatile.