24 May 2020

For ethical artificial intelligence, security is pivotal

Jan Kallberg

The market for artificial intelligence is growing at an unprecedented speed, not seen since the introduction of the commercial Internet. Estimates vary, but the global AI market is expected to grow 30 to 60 percent per year. Defense spending on AI projects is increasing at an even higher rate when we add wearable AI and systems that depend on AI. Defense investments such as augmented reality, automated target recognition, and tactical robotics would not advance at today's rate without AI to support the realization of these concepts.

The beauty of the economy is its responsiveness. With an identified "buy" signal, the market works to satisfy the buyer's need. Powerful buy signals lead to rapid development, deployment, and rollout of solutions, because time to market matters.

My concern is based on earlier cases when time to market prevailed over conflicting interests: the first years of the commercial internet, the introduction of remote control for supervisory control and data acquisition (SCADA) and manufacturing systems, and the rapid growth of smartphone apps. In each of these cases, security was not the first thing on the developer's mind. Time to market was the priority. This exposure increases with the economically sound pursuit of commercial off-the-shelf (COTS) products, as sensors, chipsets, functions, electric controls, and storage devices can be bought on the civilian market for a fraction of the cost. These COTS products cut costs, give the American people more defense and security for the money, and shorten the development and deployment cycle.


The Department of Defense has adopted five ethical principles for the department's future utilization of AI: responsible, equitable, traceable, reliable, and governable. The common denominator in all five principles is cybersecurity. If the cybersecurity of the AI application is inadequate, these five adopted principles can be jeopardized and no longer steer the DOD AI implementation.

The future AI implementation increases the attack surface radically, and of particular concern is the ability to detect manipulation of the processes, because the underlying AI processes are not clearly understood or monitored by the operators. A system that detects targets from images or streaming video, where AI is used to identify target signatures, will generate decision support that can lead to the destruction of those targets. The targets are engaged and neutralized. One of the ethical principles for AI is "responsible." How do we ensure that the targeting is accurate? How do we safeguard that the algorithm is not corrupted and that sensors are not being tampered with to produce spurious data? It becomes a matter of security.

In a larger conflict, where ground forces are not able to inspect the effects on the ground, the feedback loop that invalidates the decisions supported by AI might not reach the operators for weeks. Or it might surface only after the conflict is over. A rogue system can likely produce spurious decision support for longer than we are willing to admit.

Of the five principles, "equitable" is the area of highest human control. Even if embedded biases in a process are hard to detect, controlling them is within our reach. "Reliable" relates directly to security because it requires that the systems maintain confidentiality, integrity, and availability.

If the principle "reliable" requires cybersecurity vetting and testing, we have to realize that these AI systems are part of complex technical structures with a broad attack surface. If "reliable" is jeopardized, then "traceable" becomes problematic, because if the integrity of the AI is questionable, it is not a given that "relevant personnel possess an appropriate understanding of the technology."

The principle "responsible" can still be valid, because deployed personnel make sound and ethical decisions based on the information provided, even if a compromised system feeds spurious information to the decision-maker. The principle "governable" acts as a safeguard against "unintended consequences." The unknown is the time from when unintended consequences occur until the operators of the compromised system understand that the system is compromised.

It is evident when a target that should be hit is repeatedly missed; the effects can be observed. If the effects cannot be observed, it is no longer a given that "unintended consequences" are identified, especially in a fluid multi-domain battlespace. A compromised AI system for target acquisition can mislead targeting toward hidden non-targets, wasting resources and weapon-system availability and exposing friendly forces to detection. The time to detect such a compromise can be significant.

My intention is to show that cybersecurity is pivotal for AI success. I do not doubt that AI will play an increasing role in national security. AI is a top priority in the United States and among our friendly foreign partners, but potential adversaries will make finding ways to compromise these systems a top priority of their own.

Jan Kallberg, Ph.D., is a research scientist at the Army Cyber Institute at West Point, the managing editor of the Cyber Defense Review, and an assistant professor at the U.S. Military Academy. The views expressed are those of the author and do not reflect the official policy or position of the Army Cyber Institute at West Point, the U.S. Military Academy, or the Department of Defense.
