2 December 2020

Opinion/Middendorf: Artificial intelligence and the future of warfare

By J. William Middendorf

J. William Middendorf, who lives in Little Compton, served as Secretary of the Navy during the Ford administration. His recent book is "The Great Nightfall: How We Win the New Cold War."

Thirteen days passed in October 1962 while President John F. Kennedy and his advisers perched at the edge of the nuclear abyss, pondering their response to the discovery of Soviet missiles in Cuba. Today, a president may not have 13 minutes. Indeed, a president may not be involved at all.

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” 

This statement from Russian President Vladimir Putin comes at a time when artificial intelligence is arriving on the battlefield; some would say it is already there. Weapons systems driven by artificial intelligence algorithms will soon be making potentially deadly decisions on the battlefield. This transition is not theoretical. The immense capability of large numbers of autonomous systems represents a revolution in warfare that no country can ignore.

The Russian Military Industrial Committee has approved a plan that would have 30% of Russian combat power consist of remote-controlled and autonomous robotic platforms by 2030. China has vowed to achieve AI dominance by 2030. It is already the second-largest R&D spender, accounting for 21% of the world's total of nearly $2 trillion in 2015. Only the United States, at 26%, ranks higher. If recent growth rates continue, China will soon become the biggest spender.

If China makes a breakthrough in crucial AI technology — satellites, missiles, cyber-warfare or electromagnetic weapons — it could result in a major shift in the strategic balance. China’s leadership sees increased military use of AI as inevitable and is aggressively pursuing it. Zeng Yi, a senior executive at China’s third-largest defense company, recently predicted that on future battlefields there would be no people fighting and that, by 2025, lethal autonomous weapons would be commonplace.

Well-intentioned scientists have called for rules that would always keep humans in the loop in any military use of AI. Elon Musk, founder of Tesla, has warned that AI could be humanity’s greatest existential threat and could spark a third world war. Musk is one of more than 100 signatories calling for a United Nations-led ban on lethal autonomous weapons. These scientists forget that countries like China, Russia, North Korea and Iran will use every form of AI at their disposal.

Recently, Diane Greene, CEO of Google Cloud, announced that her company would not renew its contract to provide image-recognition software for U.S. military drones. Google had agreed to partner with the Department of Defense in a program aimed at improving America’s ability to win wars with computer algorithms.

The world will be safer, and the United States stronger, with strong American leadership in AI. Here are three steps we should take immediately.

• Convince technology companies that refusing to work with the U.S. military could have the opposite effect of what they intend. If technology companies want to promote peace, they should stand with, not against, the U.S. defense community.

• Increase federal spending on basic research that will help us compete with China, Russia, North Korea and Iran in AI.

• Remain ever alert to the serious risk of accidental conflict in military applications of machine learning and algorithmic automation. Ignorant or unintentional use of AI is understandably feared as a major potential cause of an accidental war.
