15 August 2018

JAIC: Pentagon debuts artificial intelligence hub

By Jade Leung

In October 2016, the newly formed Defense Innovation Board released its first set of recommendations. (The board, an advisory body to senior leadership in the US Defense Department, contains representatives from the private sector, academia, and nonprofits.) One recommendation that stood out was the establishment of “a centralized, focused, well-resourced organization” within the Defense Department “to propel applied research in artificial intelligence (AI) and machine learning.”

Less than two years later, the Pentagon is already transforming this idea into reality. On June 27, Deputy Defense Secretary Patrick Shanahan issued a memorandum that formally established the Defense Department’s new Joint Artificial Intelligence Center (JAIC). According to the memo, JAIC’s overarching aim is to accelerate the delivery of AI-enabled capabilities, scale the impact of AI tools, and synchronize the department’s AI efforts. To this end, JAIC will guide the execution of so-called National Mission Initiatives—large-scale AI projects “designed to address groups of urgent, related challenges.” Moreover, the National Mission Initiatives—as well as the Defense Department’s adoption of cloud technologies—will be leveraged to enable rapid delivery of AI-enabled capabilities across the department. JAIC will also serve as a platform to improve collaboration on AI-related projects with internal as well as external parties, including private companies and academics.

It is notable that JAIC’s focus will also include ethics, humanitarian considerations, and both short- and long-term AI safety. These issues—according to Brendan McCord, head of machine learning at the Pentagon entity known as Defense Innovation Unit Experimental—will be reflected in the establishment of AI defense principles that will be developed with input from multiple stakeholders. This specific dimension of JAIC, though its parameters remain abstract for now, could play an important role in realizing the Pentagon’s AI ambitions.

Developing, institutionalizing, and communicating AI defense principles transparently could not only reduce the operational risks of AI-enabled systems but also increase US national security by enabling the Pentagon to better access and integrate AI innovation resources in the United States. AI researchers and companies might be more willing to collaborate with the Pentagon if it were to establish transparent guidelines and articulate red lines for the development and deployment of AI systems. Globally, JAIC could serve as an important model for other powers pursuing similar technologies if it demonstrates that a safety-conscious and ethical approach to AI does not compromise national security, but instead can foster it. 

Why does Washington need JAIC? Since the end of the Cold War, the United States has enjoyed virtually unrivaled superpower status in the international order. An important pillar of US power has been its unmatched military-technological superiority. However, the technologies that underpinned the US military’s edge in the past, such as precision-guided weapons, have proliferated to competitors via the forces of globalized technology transfer. These competitor countries have subsequently developed capabilities that increasingly challenge US military supremacy.

The Defense Department, in order to preserve and expand its military advantage in the future, has placed its bets on AI. The department’s Third Offset strategy presents an approach that aims “to exploit all the advances in artificial intelligence and autonomy and to insert them into the Defense Department’s battle networks.” Potential applications in the military domain are diverse, ranging from enhancing the efficiency of logistics systems to more sensitive tasks such as automated command and control in advanced weapons systems. The 2018 National Defense Strategy foresees that AI will likely change the character of war; thus, in Shanahan’s words, the United States “must pursue AI applications with boldness and alacrity.”

A major challenge to the realization of the Defense Department’s AI ambitions is that the capabilities to develop and deploy cutting-edge AI technologies today sit almost exclusively within the domain of private technology companies. For the time being, the commercial AI industry in the United States is the global frontrunner, measured by indicators including hardware design and concentration of the most talented AI researchers. Meanwhile, the Pentagon has struggled to develop in-house research and development capabilities that come close to competing with private sector efforts. Steadily, the Defense Department has come to realize the necessity of adapting to a reality in which the military—which has historically been the cradle of game-changing technologies—is no longer the focal point of progress for the US technology base. Rather, the military’s technology strategy is ever more reliant on a technology base pioneered and controlled by commercial actors.

For the Defense Department, access to the private sector’s AI resources is complicated by a perceived “clash of cultures” between Silicon Valley and Washington. One element of this relationship hinges on the notion that “Silicon Valley values” would be tainted by engagement with the military as a client, funder, or even supporter. The strength of this antagonism is surprising—given that, in the earliest days of Silicon Valley, the Defense Department was both a major investor in and client of high-tech firms. Indeed, relationships with the Defense Department were a necessary boost for technology companies competing to gain traction. Despite these early forms of collaboration, a rift separates the two sides today.

The chasm has been underscored by the Defense Department’s recent efforts to engage Google in deploying AI. When Google’s involvement in Project Maven—an Air Force initiative with the goal of automating analysis of video footage gathered by drones—became public earlier this year, the revelation sparked a major controversy within and outside the company. Thousands of Google employees protested the company’s involvement in “the business of war” in an open letter and called on the company to withdraw from Project Maven. Several employees resigned. Google eventually decided not to renew the contract, and quickly released a set of AI principles to quell the public backlash. This debacle portends what is likely to be a persistent conflict—one in which private companies such as Google are caught between their public-facing values, which are inconsistent with the weaponization of AI, and their incentives to pursue ever more business opportunities with the likes of the Defense Department.

A second dimension of the clash of cultures involves the differing speeds at which Silicon Valley and the Defense Department innovate and operate. The accelerating pace of R&D cycles in the private sector stands in sharp contrast to traditional military acquisition processes, which often span years. More concretely, the sheer amount of time and resources required to jump through the hoops of the Defense Department’s lengthy bureaucratic procedures is a cost that many companies are unwilling to incur. That said, the department has made several attempts to introduce more agility into its acquisition practices to improve opportunities for collaboration with private entities. These efforts have yielded mixed results so far.

One recent initiative was the 2015 founding of the aforementioned Defense Innovation Unit Experimental—the brainchild of former Secretary of Defense Ashton Carter—with offices situated in key US innovation centers. The goals of the unit are to rebuild relationships between the Defense Department and commercial technology companies and to quickly match needs arising in the department with appropriate technological solutions. To this end, the unit has introduced the so-called Commercial Solutions Opening award mechanism, a rapid and flexible contracting process that allows the Defense Department to work at the speed of the private sector. While some have questioned the unit’s effectiveness and necessity, others have praised the initiative, arguing that it is changing the department’s business practices and improving collaboration with the private sector.

The China challenge. The United States is not the only country that aims to outstrip competitors by bringing AI to the battlefield. China, for example, has set itself the eye-catching target of leading the global AI industry by 2030. China’s AI ambitions, of course, are not limited to the commercial AI industry—they are also linked to Beijing’s military modernization efforts.

Although the Chinese People’s Liberation Army has not yet publicly adopted an official AI strategy, it seems to view AI as an opportunity to leapfrog the United States. Such a plan is eyed with particular concern in Washington due to China’s ostensibly seamless treatment of civilian and military R&D resources for artificial intelligence, as conceptualized in Beijing’s military-civil fusion model. Unlike in the United States, the government and private companies in China are closely linked, which enables smoother cooperation. China therefore appears to be in a more favorable position than the United States to exploit the innovation potential of the private sector to enhance both economic and military advantage. This perception has only escalated pressure on the US military to find effective, efficient ways of leveraging AI to stay in the running with competitors.

Can JAIC help overcome these challenges? While the Pentagon has provided limited details on JAIC’s specific plans and operations, its high-level mandates suggest that it could lay the groundwork for an AI defense strategy that addresses both the structural and cultural challenges currently hampering the Defense Department.

A substantial part of JAIC’s stated remit appears to be focused on organizing and streamlining underlying infrastructure so that the US military will be able to mobilize AI more efficiently and effectively. JAIC will lead the execution of National Mission Initiatives—coordinating across military departments and services, the Joint Chiefs of Staff, combatant commands, and several other components of the Defense Department. JAIC’s establishing memorandum emphasizes the need to develop department-wide tools, technologies, and processes to build a common foundation for executing the nation’s AI ambitions, and the department is signaling that JAIC will be the institutional entity leading such efforts.

In this vein, it seems likely that JAIC will contribute at least in part to establishing a more coherent, coordinated effort among the numerous federal agencies invested in the integration of AI. By centralizing and standardizing core inputs to AI applications, such as shared data and protocols, JAIC could substantially decrease the costs of developing and deploying AI. Moreover, if such improvements extend to reforming the technology acquisition process, JAIC could in theory reduce transaction costs for private AI companies engaging with the government, making such business propositions more attractive.

However, improvements in efficiency are insufficient to wholly address the barriers to the US government’s AI ambitions. In order to encourage productive relationships between federal agencies and leading AI companies and researchers, JAIC—and other elements of the Defense Department’s broader AI strategy—will need to credibly commit to approaching AI development and deployment with ethics and safety at the forefront. This is critical for securing cooperation from the private sector and academia, and for maintaining the US military’s credibility and integrity on the international stage.

Where cooperation with the private sector is concerned, recent controversies highlight the fact that AI researchers and private sector leaders regard ethics and safety as non-negotiable commitments. Project Maven is only one of several indications that tensions over private sector engagement with government agencies are far from resolved. Amazon and Microsoft have recently come under fire for selling AI-enabled facial recognition software to immigration and law enforcement agencies. The Future of Life Institute, for its part, garnered support from several leading AI companies, including Element AI and Google’s DeepMind, in a letter calling for governments to ban lethal autonomous weapons. The letter was also signed by numerous AI researchers.

AI companies and researchers are facing a defining challenge—when and how to engage with the national security community. Some maintain that a blanket boycott is the only route forward. Others take the view that AI’s integration into the military is inevitable, and thus the task is to shape AI’s integration so that it is achieved in an ethically grounded fashion. If the US government fails to articulate a principled stance toward ethical and safety concerns, it will alienate both camps. However, if JAIC becomes a pioneer in signaling commitment to ethics and safety, it could invite a new wave of collaboration with the private sector—based on a common, nuanced understanding of how AI should and shouldn’t be deployed by the United States.

On the international stage, JAIC likewise offers an important opportunity for the United States to signal to other countries its commitment to the safety and security of AI systems as they are developed and deployed. Precedent for strong safety norms exists within US military institutions, most notably among Navy submariners, aircraft carrier deck operators, and Aegis weapon system operators. A safety-oriented approach to the integration of AI is thus a natural extension of US military conduct—and will be necessary, though challenging. Leading AI researchers have already identified a number of ways in which modern machine learning systems still lack robustness—and thus are vulnerable to causing accidents unforeseen by their human designers. Such risks can be magnified by institutional and operational errors which, in the context of the US military, have sometimes had grave consequences. Such errors will likely become more prevalent with the increasing complexity of deployed technological systems.

The US military should thus place a priority on mitigating risks associated with unintended consequences of AI-enabled systems, and set a standard for other militaries that pursue similar technologies. It is highly likely that a safety-conscious mandate would be received well abroad, and would help forge productive alliances with the likes of Canada and the European Union, both of which have highlighted the importance of ethics and safety in their respective AI strategies. Such an approach could also reduce the risk that rivalry dynamics between the United States and China would devolve into a “race to the bottom,” whereby each country takes shortcuts on enforcing the safety and robustness of its AI systems to gain an advantage over the other—resulting in substantial risk of accidental harm.

As it stands, JAIC is well positioned as a vehicle for signaling credible US commitment to an ethical and safe approach to the pursuit of AI. Indeed, one of JAIC’s core mandates is to work with the Office of the Secretary of Defense to develop standards and a governance framework for AI development and delivery. In establishing JAIC, the Defense Department has signaled its intention to ensure a “strong commitment to military ethics and AI safety.” Thus JAIC—as it becomes a focal point for propelling AI forward as a crucial element of US national defense—presents a timely opportunity for the United States to articulate a robust approach to ethical and safe artificial intelligence.
