19 February 2021

COLLABORATION OR CHAOS: TWO FUTURES FOR ARTIFICIAL INTELLIGENCE AND US NATIONAL SECURITY

Bilva Chandra

The future of US national security will be shaped in large measure by how machines transform and accelerate human endeavor. Machine learning has already produced cutting-edge developments such as GPT-3, a natural language processing model that can imitate human text; deepfakes enabled by generative adversarial networks; and AlphaGo, the first computer program to beat a professional Go player. Human-machine teaming is at the vanguard of military and defense innovation. However, effectively leveraging these technologies, and mitigating the dangers of an adversary's nefarious use of them, is unimaginable without public-private partnerships.

Currently, much of the debate surrounding AI public-private partnerships is characterized by tech employees refusing to directly aid in what they view as the business of war. The actual sentiments of tech professionals toward collaboration with the Department of Defense, however, are more nuanced. According to a survey conducted by Georgetown University's Center for Security and Emerging Technology, only a small minority of respondents (7 percent) expressed extremely negative feelings about working on DoD AI projects. Still, the survey found that many AI professionals associate such projects with "killer drones." Given DoD's technical readiness and modernization goals, however, government involvement in the commercial development of emerging technologies is unavoidable on a broader scale. For this reason, increased awareness in the private sector of the defense applications of AI is crucial.

The US government and private sector companies must partner on artificial intelligence; without such collaboration, the United States is at risk of both trailing behind its adversaries on AI development and failing to establish ethical AI frameworks.

Adversarial Applications of Military AI

Artificial intelligence is at the forefront of US great power competition with China, and the Chinese Communist Party (CCP) has displayed a deep commitment to developing AI. The CCP's Military-Civil Fusion (MCF) strategy seeks to take advantage of the private sector to develop core technologies such as AI, quantum computing, semiconductors, big data, and 5G. Though only in its early stages, the strategy aims to massively mobilize civilian economic sectors to serve the CCP's defense ambitions by offering incentives to Chinese enterprises. Total investment in MCF over the span of several years is estimated at $68.5 billion. China is able to fund civilian innovation for military applications far more directly than the United States, owing to its unique system of authoritarian state capitalism.

The expanding presence of AI in commercial products also raises the risk of nonstate actors employing AI maliciously. Apart from state adversaries, dangerous nonstate actors such as the Islamic State have dabbled in rudimentary AI technology, specifically drones, for use in their operations. Most standard commercial drones use computer-vision technology, which detects, classifies, and tracks objects. Unfortunately, the dual-use nature of drones and AI-enabled technology makes them difficult to regulate and their proliferation hard to control. The Islamic State's drone of choice is the DJI Phantom, a Chinese-manufactured model with obstacle sensing and avoidance capabilities. The group has refurbished commercial drones to fit its purposes and used them not just for reconnaissance and recording aerial videos, but also for geographic mapping and delivering explosives. By using commercially available drones, the Islamic State has been able to conduct reconnaissance and intelligence operations swiftly while reducing threats to its fighters. As commercial drones become more advanced, with enhanced machine-learning capabilities, nonstate groups will capitalize on their increased sophistication. According to a 2018 report, there is a credible threat of terrorists repurposing autonomous vehicles and advanced commercial AI drones for explosives delivery and targeted assaults. Nonstate actors' interest in emerging technologies and AI is a dangerous trend with future implications for escalation.
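To illustrate how accessible this kind of capability has become, below is a minimal sketch of off-the-shelf object detection of the sort that underpins commercial computer vision, using OpenCV's built-in pedestrian detector on a video file. The file name drone_feed.mp4 is a hypothetical placeholder, and the example is purely illustrative; it does not describe any particular drone's software.

```python
# Minimal sketch: detecting and marking people in a video feed using
# OpenCV's prepackaged HOG pedestrian detector. "drone_feed.mp4" is a
# hypothetical placeholder file name.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("drone_feed.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people in the frame; returns bounding boxes and confidence weights.
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The point is not the specific detector but the barrier to entry: object detection, classification, and tracking are available to anyone with a laptop and an open-source library.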

No nation or private organization has a monopoly on AI technologies; therefore, we cannot directly curb the development of AI by nonstate actors or state adversaries. In fact, most AI scientists and researchers openly publish their algorithms, code libraries, and training data sets: the integral pieces that individuals can assemble and put to use, including for malign purposes. If public-private collaboration is not prioritized, the United States and its allies could fall behind the AI curve as their adversaries continue to accelerate technological acquisition and use.
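As a concrete illustration of how low the barrier to assembly is, the hedged sketch below loads an openly published pretrained language model (the freely released GPT-2 checkpoint distributed through the open-source Hugging Face transformers library) and generates text in a few lines. The prompt is an arbitrary placeholder; the point is simply that published weights and code can be put to use, for benign or malign ends, with minimal effort.

```python
# Minimal sketch: assembling openly published model weights and code.
# Requires the open-source "transformers" library; "gpt2" is a freely
# released checkpoint used here purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
output = generator("Breaking news:", max_length=40, num_return_sequences=1)
print(output[0]["generated_text"])
```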

Collaboration for Ethical AI

Public-private collaboration is instrumental in the creation of ethical AI frameworks for US defense and national security and must expand to prevent the future detrimental use of AI. Several private-sector companies and academic consortiums have already developed their own guidelines. For example, the Institute of Electrical and Electronics Engineers has published a global treatise on AI that emphasizes ethically designed AI systems promoting universal human values, and the Partnership on AI to Benefit People and Society unites civil society and research groups to create best practices, increase public awareness, and serve as a discussion forum for AI. Within DoD itself, DARPA is an excellent example of how private-sector expertise paired with government support produces ethical innovation. Its Urban Reconnaissance through Supervised Autonomy program, for instance, aims to reduce civilian casualties on the battlefield, falling well within ethical norms for AI use.

The value of public-private and cross-sector initiatives goes beyond defense applications. In light of the recent siege of the US Capitol by violent domestic actors, conversations surrounding online content moderation and Big Tech's obligation to protect democracy have intensified. Artificial intelligence is at the forefront of this challenge. Specifically, developing algorithmic accountability to mitigate AI's black-box dilemma is instrumental both in reducing algorithmic bias and in sustaining repeatable content moderation policies. Effective content moderation is vital to the US government's national security interests and increasingly important to the brand reputations of technology companies such as Twitter, Facebook, and Apple, which means there are real incentives to work together.
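One way to make the accountability point concrete is to contrast a black-box model with a transparent classifier whose learned weights can be audited. The sketch below is a hypothetical illustration, not a description of any platform's actual moderation system: a logistic regression over TF-IDF features, trained on an entirely made-up toy dataset, whose per-term weights can be inspected and documented in support of repeatable moderation decisions.

```python
# Minimal sketch of a transparent (non-black-box) moderation classifier.
# The example texts and labels are hypothetical placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "join the rally peacefully tomorrow",
    "share this petition with your friends",
    "bring weapons and storm the building",
    "we will attack them at the gates",
]
labels = [0, 0, 1, 1]  # 0 = benign, 1 = violates policy

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Because the model is linear, each term's learned weight can be audited,
# which supports the kind of algorithmic accountability discussed above.
vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
for term, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{term}: {weight:+.2f}")

print(model.predict(["meet at the gates and attack"]))
```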

Recommendations

There is a wide array of promising avenues for cooperation between public and private entities. US policymakers can work with technology companies to strengthen operational security, incentivize socially beneficial AI research, and enforce intellectual property regimes, all key steps toward the overarching objective of reducing the likelihood of malign use of AI by both state and nonstate actors. Expanding public-private partnerships on AI will help ensure that the United States does not trail China in AI strategy. While the United States may not have an equivalent of the CCP's MCF strategy, it can close the gap in its ongoing AI race with China through mutually beneficial contracts and private-sector incentives. Lastly, the US government can bolster ethical frameworks for AI by working in tandem with private entities and research institutions: drawing on preexisting private-sector guidelines for AI use, refining content moderation policies and machine-learning algorithms with Big Tech to help prevent offline threats, and harnessing the private sector's technical expertise to design ethical AI for the battlefield.

Artificial intelligence is both a thrilling beacon of modernization for the government and an area of promising growth for private firms. Neither can afford to silo itself, as a lack of collaboration will hinder both US national security interests and opportunities for private-sector innovation. The labyrinthine threat environment of unyielding US adversarial interests and the need for ethical AI frameworks both demand cooperation; without it, we are doomed to chaos.

Bilva Chandra is a data analyst at Zignal Labs, a media intelligence firm, and a master's student in the Georgetown Security Studies Program, focusing on technology and security. She previously coauthored a piece in The Strategy Bridge on biosecurity in a post-COVID-19 America, and presented original group research on deplatforming effects at two panels: Zignal's Disinformation and Social Movements Town Hall and US Army Futures Command's Mad Scientist event. Her areas of expertise and interest include mis/disinformation, domestic extremism, artificial intelligence, data analytics, data privacy and content moderation issues, and public-sector advisory.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.
