3 November 2023

Let’s Talk About AI on the Battlefield

James Stavridis

The White House has released a sweeping executive order on artificial intelligence, which is notable in a number of ways. Most significantly, it establishes an early-stage means of regulating the controversial technology, which promises to have a vast impact on our lives. The new approach is being closely coordinated with the European Union and was initially announced by the administration in July. President Joe Biden further highlighted the proposal during a September meeting with his Council of Advisors on Science and Technology in San Francisco.

The executive order will push for a high degree of public-private cooperation and is timed to come out just days before Silicon Valley leaders gather with international government officials in the United Kingdom to look at both the dangers and benefits of AI. It will also require detailed assessments — think of drug testing by the FDA — before specific AI models can be used by the government. The new regulations will seek to bolster the cybersecurity aspects of AI and make it easier for brainy technologists, such as H-1B program candidates, to immigrate to the United States.

Most of the key actors in the AI space seem to be on board with the thrust of the new regulations, and companies as varied as the chipmaker Nvidia and OpenAI have already made voluntary commitments to manage the technology along the lines of the executive order. Google is also fully involved, as is Adobe, which makes Photoshop, a key area of concern because of the potential for AI-driven image manipulation. The National Institute of Standards and Technology will lead the government side in creating a framework for risk assessment and mitigation.

All of these are sensible steps in the right direction. But something is missing, at least at an unclassified level: there are no indications of similar efforts in the sphere of military activity. What should the US and its allies be considering in terms of regulating AI in that sphere, and how can we convince adversaries to take part?

First, we need to consider the potentially significant military aspects of AI. Like other pivotal moments in military history, such as the introduction of the longbow, the invention of gunpowder, the creation of rifled barrels, the arrival of airplanes and submarines, the development of long-range sensors, the emergence of cyber operations on the battlefield, and the advent of nuclear weapons, AI will rearrange the battlefield in significant ways.

For example, AI will allow decision-makers to instantly survey all of military history and select the best path to victory. Imagine an admiral who could simultaneously be afforded the advice of every successful predecessor, from Lord Nelson at Trafalgar to Spruance at Midway to Sir Sandy Woodward in the Falklands. Conversely, AI will also be able to accurately predict logistical and technological failure points. What if the Russians had been able to use AI to correct their glaring faults in logistics and vulnerability to drones in the early days of the Ukraine war?

It may become possible to spoof intelligence collection through manipulated images, spread instantly throughout social networks and driven directly into the sensors of satellites and radars. AI can also speed the invention and distribution of new forms of offensive military cyberattacks, overcoming current levels of cryptographic protection. It could, for example, convince an enemy sensor system that a massive battle fleet was approaching its shores — while the main attack was actually occurring from space.

All of this and much more are coming at an accelerated pace. Look at the timelines: It took a couple of centuries for gunpowder to fulfill its lethal potential. Military aviation went from Kitty Hawk to massive aerial bombing campaigns in less than 40 years. AI will likely have dramatic military impact within a decade or even sooner.

As we have with nuclear weapons, we are going to need to develop military guardrails around AI, akin to arms control agreements. In creating these, decision-makers must consider, among other things, potential prohibitions on lethal decision-making by AI (keeping a human in the loop); restrictions on using AI to attack nuclear command and control systems; a Geneva Conventions-like set of rules prohibiting manipulation of or harm to civilian populations using AI-generated images or actions; and limits on the size and scale of AI-driven “swarm” attacks by small, deadly combinations of unmanned sensors and missiles.

Using the 1972 Cold War “Incidents at Sea” agreement as a model might make sense. Under that agreement, the Soviet Union and the United States agreed to limit closure distances between ships and aircraft; refrain from simulated attacks or the manipulation of fire-control radars; exchange honest information about operations under certain circumstances; and take measures to limit damage to civilian vessels and aircraft in the vicinity. The parallels are obviously far from exact, but the idea — having a conversation about reducing the risk of disastrous military miscalculation — is sound.

Starting a conversation within NATO could be a good beginning, setting a sensible course for the 31 allied nations in terms of military developments in AI. Then comes the hard part: broadening the conversation to include, at a minimum, China and Russia, both of which are attempting to outrace the US in every aspect of AI.

The Biden administration is on the right path with the new executive order. Certainly, we need to get the tech sector on board in terms of considering the risks and benefits of AI. But it is also high time to get the Pentagon cracking on a military version of such regulations, and to bring in more than just our friends in the West.
