28 May 2022

The Navy Must Learn to Hide from Algorithms

Lieutenant Andrew Pfau

During World War I, German submarines menaced Allied shipping. Without radar or sonar, the Allies struggled to locate and attack the submarines in the stormy and foggy North Atlantic. To confuse and deceive the enemy, the Allies painted their ships to camouflage them on the ocean. These paint schemes, often called dazzle camouflage, were designed not only to conceal a ship's presence, but also to complicate the submarine's fire-control solution by making it more difficult to determine the ship's aspect. Paint schemes remained in use through World War II and still find occasional use today. In an era of renewed great power competition, this deception tactic should not be retired but instead scaled for the 21st century.

Instead of relying solely on human detection, as German submarine captains did, modern military systems increasingly rely on artificial intelligence (AI) and machine learning (ML) systems to detect and classify objects in images and video feeds. Algorithms monitor satellite and drone feeds and alert a human if an object of interest is detected. The volume of data generated by these sensor platforms means that human intelligence analysts cannot effectively search for targets without AI/ML algorithms. The Department of Defense (DoD) has developed algorithms to perform this task through Project Maven, an effort to apply computer vision algorithms to drone feeds to identify objects.

According to the Department of the Navy's Unmanned Campaign Framework, unmanned systems will regularly fly above and sail on and under the ocean, monitoring their surroundings with AI/ML-enabled systems. Unmanned systems will alert human operators only when they find a target they have been trained to find. The large volume of data flowing through these systems and sensors will force humans to spend more time supervising and checking machine-produced outputs than examining inputs themselves. Like other evolutions in military technology, this will trigger a series of detection and counterdetection innovations. To find a way to avoid detection, naval assets can look to the past.

Adversary Systems

China is pouring money into AI research. A report by the Center for Security and Emerging Technology at Georgetown University identified that the People's Liberation Army (PLA) spends approximately $1.6 billion each year on AI technology and systems, with automated target recognition being a key area of interest. Another analysis by Stanford University found that Chinese AI researchers have overtaken U.S. researchers in both the number of scientific research papers published and the citations those papers receive. This level of investment, coupled with close ties between the PLA, academic researchers, and private companies (China's version of military-civil fusion), stands in contrast to the approach of the United States. DoD is well aware of this, stating in a 2020 report on Chinese military power that "the PRC is pursuing a whole-of-society effort to become a global leader in AI, which includes designating select private AI companies in China as 'AI champions' to emphasize R&D in specific dual-use technologies." Major U.S. tech companies have been hesitant to work closely with the DoD on advanced AI systems, fearing their technology will be misused.


The PLA seeks to use these technologies to create "intelligentized warfare," the incorporation of AI/ML technology into weapon systems. Target detection and classification algorithms can be an integral part of the kill chain, with input from satellites, drones, or cameras on the weapons themselves. Satellite images have recently shown target mockups in the shape of U.S. aircraft carriers, destroyers, and amphibious ships on missile ranges in western China. One possible explanation for the effort to create realistic targets is to test target detection and classification algorithms in the kill chain. Some even speculate that PLA ballistic missiles may use visual algorithms in their terminal phase. The PLA hopes to use these and other advanced AI technologies to overwhelm U.S. and allied capabilities in the western Pacific.

How to Deceive Algorithms

Any system that uses computer algorithms to detect and classify objects without human intervention can be considered an AI-enabled system. Machine learning, a subset of the AI field, includes many of the algorithms and techniques used to sort through the massive volumes of data collected every day. One ML technique in particular, "deep learning," is responsible for most of the explosive growth in the field of ML over the last several years and is the predominant method used in computer vision today.

Machine learning algorithms, like humans, are not perfect, and given the right inputs they can be fooled by what they see. Attempts to deceive AI/ML-enabled systems are known as adversarial attacks, and when they are carried out in the physical world, they are called physical adversarial attacks. These attacks attempt to trick AI/ML-enabled systems not by changing digital data on a computer, but by making physical changes to the real world. For example, researchers have successfully modified stop signs to appear as speed-limit signs to Tesla cars. Glasses and shirts have been created to fool facial recognition algorithms that may be running on public surveillance cameras. Some attacks simply seek to ensure the system does not predict the correct object type; an attacker may want to protect his or her privacy by obscuring his or her face to defeat facial recognition algorithms. Other times, an attacker may want the algorithm to predict that the attacker is a particular person, not just anyone. In one study, researchers at Carnegie Mellon University created special glasses frames designed to trick a facial recognition algorithm into thinking the wearer was Brad Pitt.
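To make the idea concrete, the sketch below shows one of the simplest digital adversarial attacks, the fast gradient sign method, applied to a generic, publicly available image classifier. The pretrained model and the perturbation budget are illustrative assumptions, not a description of any fielded system.

    # A minimal sketch of the fast gradient sign method (FGSM) against a
    # generic pretrained image classifier. The model choice and the
    # perturbation budget (epsilon) are illustrative assumptions only.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

    def fgsm_perturb(image, true_label, epsilon=0.01):
        """Return a slightly altered copy of `image` (shape 1x3xHxW, values in
        [0, 1]) that the model is more likely to misclassify, while a person
        sees essentially the same picture. `true_label` is a 1-element tensor
        holding the correct class index."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Nudge every pixel a small step in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

The change to any single pixel is tiny, which is why such perturbations can be imperceptible to a human observer yet still flip the model's prediction.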

One challenge of developing ML systems is understanding what the model is learning to associate with a particular object it seeks to classify. This weakness provides an opening for attackers to exploit. For ships, a model could learn that all destroyers and aircraft carriers have gray hulls, while container ships are black, blue, or green. By simply changing the color of a destroyer, a navy could fool this system into classifying it as a container ship.

Deception for the 21st Century

The adversary is already training ship- and aircraft-recognition systems today on images and videos of U.S. ships and aircraft at sea and in port. Deception is an ongoing contest that occurs in peace and war; the United States should act now to deceive and hide from adversary ML systems.

The first deception method to employ would be paint or other markings on ships and aircraft. When researchers tricked a Tesla autopilot into thinking a stop sign was a speed-limit sign, they did so with a few pieces of carefully placed black tape, not by repainting the entire sign. A few well-placed sections of paint could be all it takes to defeat a classification algorithm. However, unlike their 20th-century predecessors, methods to deceive ML systems will have to appear on the entire ship, not just the sides of the hull, to prevent detection by overhead imaging.

Physical adversarial attacks should not be limited to paint modifications. As with the glasses frames used to trick facial-recognition algorithms, the Navy may need to augment ships with physical structures. This could include modifications to the superstructures so their outlines, viewed from the ocean or above, are not distinct. Just as General Dwight D. Eisenhower deployed an army of inflatable tanks and trucks to deceive German intelligence before the June 1944 D-Day invasion of France, ships could have removable devices to modify their shape to confuse adversary algorithms. These decoys would be especially important when sailing into foreign ports or operating in the vicinity of adversary intelligence ships.

But how will attackers know which types of deception will be successful in hiding warships in plain sight? Attackers can start in the digital realm, modifying images of ships and planes to test what might work. ML models can be trained to produce modified images that evade classifier models. These models can generate disguise patterns for ships or aircraft within given limits, such as how much of a ship's surface can realistically be altered.
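A rough sketch of that digital-realm experimentation follows, assuming a stand-in "surrogate" classifier, a small set of ship images, and a class index for "warship." All three are hypothetical placeholders; real work would involve far larger datasets and constraints derived from what can actually be painted or built.

    # A sketch of optimizing a bounded "disguise" perturbation against a
    # stand-in classifier so that ship images are no longer labeled warships.
    # surrogate_model, ship_images, and warship_class are hypothetical.
    import torch
    import torch.nn.functional as F

    def train_evasion_pattern(surrogate_model, ship_images, warship_class,
                              epsilon=0.05, steps=200, lr=0.01):
        pattern = torch.zeros_like(ship_images[0], requires_grad=True)
        optimizer = torch.optim.Adam([pattern], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            loss = 0.0
            for img in ship_images:
                logits = surrogate_model((img + pattern).clamp(0, 1).unsqueeze(0))
                # Maximize the loss on the warship class so it stops being predicted.
                loss = loss - F.cross_entropy(logits, torch.tensor([warship_class]))
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                pattern.clamp_(-epsilon, epsilon)  # keep the modification within limits
        return pattern.detach()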

These attacks become harder still when the attacker can see, at most, the predictions of the system and not its inner workings, a so-called "black-box" attack. While attackers can make educated guesses about how an adversary's model may work, they will likely have access to neither the model nor its predictions. Instead, deceivers will have to build best-guess models of their own and attempt to trick those with modifications to objects of interest. This effort will take the time and talents of the Navy's civilian engineers, researchers, and academics.
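One way to hedge against that uncertainty, sketched below, is to craft a pattern on one best-guess surrogate model and then check whether it "transfers" to a second, held-out model that played no part in crafting it. Both models here are publicly available stand-ins, used only as a rough proxy for an unseen adversary system.

    # A sketch of a transferability check: a pattern crafted against one
    # surrogate model is evaluated against a different, held-out model.
    # Both models are illustrative stand-ins, not adversary systems.
    import torch
    from torchvision import models

    surrogate = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
    holdout = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT).eval()

    def evades_holdout(image, pattern, warship_class):
        """True if the held-out model no longer labels the perturbed image a warship."""
        perturbed = (image + pattern).clamp(0, 1).unsqueeze(0)
        with torch.no_grad():
            predicted = holdout(perturbed).argmax(dim=1).item()
        return predicted != warship_class

Patterns that fool several independently built models are more likely, though never guaranteed, to fool a model no one outside the adversary has seen.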

Beyond Visual

Visual detection and classification are not the only means to locate ships or aircraft. Increasingly, radar, sonar, and sensors that collect electromagnetic spectrum emissions locate ships at sea long before they enter visual range. Therefore, deceiving these sensors will be just as important.

To deceive sonar sensors, submarines and unmanned undersea vehicles (UUVs) will have to change their acoustic signatures by changing the sound frequencies they emit. Submarines or UUVs would emit deception sounds to mask their acoustic signatures and confuse any system attempting to detect them. These sound emissions would not be used all the time, but rather only when necessary, such as when operating near adversary ships or fixed sensors. Researchers have already been successful in fooling voice- and audio-recognition algorithms: by adding small sound perturbations to songs, they were able to trick a speech-recognition algorithm into detecting the phrase "open the door."
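By analogy with the image attacks sketched earlier, optimizing such a masking emission against a stand-in acoustic classifier might look like the following; the acoustic model, the recorded waveform, and the signature class index are all hypothetical placeholders.

    # A sketch of optimizing a quiet additive perturbation so that a stand-in
    # acoustic classifier stops reporting a target signature. acoustic_model,
    # waveform, and signature_class are hypothetical placeholders.
    import torch
    import torch.nn.functional as F

    def mask_signature(acoustic_model, waveform, signature_class,
                       epsilon=1e-3, steps=100, lr=1e-4):
        noise = torch.zeros_like(waveform, requires_grad=True)
        optimizer = torch.optim.Adam([noise], lr=lr)
        for _ in range(steps):
            optimizer.zero_grad()
            logits = acoustic_model((waveform + noise).unsqueeze(0))
            # Push the classifier's output away from the target signature class.
            loss = -F.cross_entropy(logits, torch.tensor([signature_class]))
            loss.backward()
            optimizer.step()
            with torch.no_grad():
                noise.clamp_(-epsilon, epsilon)  # keep the added sound very quiet
        return noise.detach()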

Acoustic deception techniques would be especially useful for UUVs operating near adversary sonar sensors, areas deemed too dangerous for manned submarines. A manned submarine may deploy a UUV to act as a decoy when conducting approach and attack against adversary ships. Relying solely on a UUV or submarine’s quieting ability may not be feasible in a future conflict with a peer adversary. Instead, tricking adversary systems into not recognizing enemy assets, and thus not alerting human operators, could provide the needed stealth.

Electromagnetic spectrum and radar deception pose significant challenges. While a body of publicly available research exists on attacking visual and acoustic models, little comparable research is publicly available in these domains. Because of their specific military applications, defense research will have to spearhead developing methods to evade radar and electromagnetic classifiers.

The New Dazzle Camouflage?

ML systems are more brittle than many people think. Small changes to an image or sound can trick a model into producing an incorrect classification, even though a human can still identify the object correctly and may not notice the changes at all. Defeating these models can start now, with simple changes to paint schemes or ship superstructures. The objective of these disguises will be to trick adversary systems into not recognizing ships, planes, or submarines as U.S. Navy assets.

Adversaries are not waiting until conflict to train and deploy ML models for detection and classification. Navy planners must think about how the adversary will employ AI/ML systems and begin to develop defensive measures against them. Adversaries already envision future conflict as “algorithmic warfare.” The side that can predict actions and update models with new information faster wins.

In the film adaptation of Patrick O'Brian's novel Master and Commander, Captain Jack Aubrey disguises HMS Surprise as a British whaler to lure in the French frigate Acheron. The ruse is revealed at the last moment, allowing Aubrey and his crew to defeat the more powerful and faster Acheron. Deceiving ML models will be more difficult than deceiving a 19th-century ship captain peering through a dim telescope. As the sensors of war and the methods used to interpret the data from those sensors evolve, so, too, must deception techniques.
