10 March 2023

US and China can show world leadership by safeguarding military AI

Paul Scharre

The recent balloon incident highlights the fragility of US-China relations and the risk of accidents and miscalculation. While balloons are a 200-year-old technology, the United States and China are developing new technologies that come with new risks. Chief among these is artificial intelligence (AI), which has many military applications but can also lead to accidents or to humans placing undue trust in machines.

The hasty deployment of chatbots like ChatGPT by Microsoft and Google demonstrates the risks of moving too quickly with an unproven technology. Competitive pressures in the private sector have led tech companies to race ahead and field AI systems that are not yet safe. Nations must avoid similar temptations with military AI.

Unsafe military AI systems could cause accidents or give humans faulty information, exacerbating tensions in a crisis. China and the United States must develop confidence-building measures to reduce the destabilising risks of military artificial intelligence.

Both American and Chinese defence thinkers have publicly acknowledged the risks of military AI competition and the need to explore avenues of cooperation. In a 2021 article, Li Chijiang, secretary general of the China Arms Control and Disarmament Association, warned of the “severe challenges that artificial intelligence military applications pose to international peace and security” and called for nations to “jointly seek solutions”.

In a 2020 New York Times article, People’s Liberation Army Senior Colonel Zhou Bo called for the US and China to adopt confidence-building measures, including in AI. Zhou cited the 1972 US-Soviet Incidents at Sea Agreement as inspiration.

Zhu Qichao, a leading Chinese thinker on military AI, wrote in a 2019 article with Long Kun: “Both sides should clarify the strategic boundaries of AI weaponisation (such as whether AI should be used for nuclear weapons systems), prevent conflict escalation caused by military applications of AI systems, and explore significant issues such as how to ensure strategic stability in the era of AI.”

Li Bin of Tsinghua University, one of China’s pre-eminent scholars on arms control and strategic stability, said at the 2019 World Peace Forum in Beijing: “In the Cold War, the United States and the Soviet Union developed a lot of cooperation on nuclear weapons issues. I understand that right now we have a lot of problems … but that should not stop us from developing real, practical cooperation among countries.”

Multiple US experts have similarly called for confidence-building measures to reduce AI risks.

Both governments have formally spoken out on AI risks and the need for states to adopt measures to mitigate risks. In a 2021 position paper issued at the United Nations, the Chinese government stated: “We need to enhance the efforts to regulate military applications of AI”. The paper argued for the need to consider “potential risks” and highlighted “global strategic stability” and the need to “prevent [an] arms race”.

In February this year, the US State Department issued a “Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy”, calling for states to adopt “best practices” in military AI use. These unilateral statements are valuable measures that increase transparency and mutual understanding. But they are not enough.


The US and China must move beyond unilateral statements and begin developing shared confidence-building measures to manage the risks of military AI competition.

Confidence-building measures can include increased communication and information-sharing among countries, inspections and observers, “rules of the road” for military operations, and limits on military readiness or activities. Such measures have been successful in reducing instability in the past.

The 1972 US-Soviet Incidents at Sea Agreement was highly successful in reducing dangerous incidents, such as collisions or near collisions, between US and Soviet planes and ships.

China and India have several agreements in place to manage tensions on the border, and while these have not entirely prevented border clashes, they may have helped reduce violence and prevent unwanted escalation.

There are many promising opportunities for US-China cooperation on reducing military AI risks. The two nations could affirm the need for rigorous testing of AI systems before they are fielded, to ensure safe operation and avoid accidents.

Building on US statements in the 2022 Nuclear Posture Review, China and the US could, along with other nuclear states, agree to ensure positive human control over nuclear weapons and keep humans in the loop for nuclear launch decisions.

And taking a page from the US-Soviet Incidents at Sea Agreement, the two nations could establish an international autonomous incidents agreement to reduce the risk of accidents and miscalculation involving air and sea drones.

Most of these agreements can and should be multilateral, but an agreement between the US and China would provide a strong foundation for bringing other nations on board. China and the US lead the world in artificial intelligence. As the world's top economic and military powers, they have an opportunity to steer the world towards a future that manages AI risks safely and maintains global peace and stability.
