17 June 2022

Exploring the Civil-Military Divide over Artificial Intelligence

James Ryseff, Eric Landree, Noah Johnson, Bonnie Ghosh-Dastidar

Artificial intelligence (AI) is anticipated to be a key capability for enabling the U.S. military to maintain its dominance. The U.S. Department of Defense (DoD)'s engagement with leading high-tech private-sector corporations, for which the military represents a relatively small share of the customer base, provides a valuable conduit to cutting-edge AI-enabled capabilities and access to leading AI software developers and engineers. To assess the views of software engineers and other technical staff in the private sector about potential DoD applications of AI, a research team conducted a survey that presented a variety of scenarios describing how the U.S. military might employ AI and asked respondents to describe their comfort level with using AI in these ways. The scenarios varied several factors, including the distance from the battlefield, the destructiveness of the action, and the degree of human oversight of the AI algorithm. The survey results indicate that most U.S. AI experts do not oppose the basic mission of DoD or the use of AI for many military applications.

Key Findings

An unbridgeable divide between Silicon Valley and DoD does not appear to exist.

Respondents from Silicon Valley technology firms and alumni of universities with top-ranking computer science departments are comfortable with a variety of military applications for AI.

There is a meaningful difference in the comfort level for AI applications that involve the use of lethal force.

About one-third of respondents from the three surveyed Silicon Valley technology corporations were uncomfortable with lethal use cases for AI.

Tech workers have low levels of trust in leaders, even their own.

Software engineers and other technology workers have low levels of trust in individuals who hold leadership positions.

Technology workers trust CEOs of technology companies almost as little as they trust elected officials or the heads of federal agencies.

Tech workers are most concerned about cyber threats to the United States.

More than 75 percent of respondents from all three populations also regarded China and Russia as serious threats to the United States.

Tech workers support the use of military force to defend against foreign aggression.

Survey respondents strongly supported using military force to defend the United States and its NATO allies from foreign aggression, with nearly 90 percent of participants finding the use of military force to be justified under these circumstances.

Silicon Valley tech workers have little personal connection to the military.

Less than 2 percent of Silicon Valley respondents had served in the U.S. armed forces.

Almost 20 percent of software engineers working at defense contractors had previously served in the U.S. military.

Recommendations

Mechanisms should be explored to expand collaborations between DoD and Silicon Valley companies on the threats posed by cyberattacks, which Silicon Valley engineers see as a critical global threat and a promising application for AI.

Expansion of engagements among personnel involved with military operations, DoD technical experts, and Silicon Valley individual contributors (nonmanagerial employees) working in technical roles should be explored to assess possible conduits for developing greater trust between the organizations.

The potential benefits of DoD engaging Silicon Valley engineers on some of the details of how DoD would use AI should be explored, as should how the military weighs the nuanced and complex situations in which AI would be used.

The value of establishing opportunities for DoD and Silicon Valley employees to engage over shared values and principles should be investigated. The recently published DoD ethical principles for AI demonstrate that DoD itself is uncomfortable with some potential uses for AI; this could serve as the foundation for a conversation with Silicon Valley engineers about what AI should and should not be used for.

Another potentially fruitful area for investigation would be assessing and adapting various types of engagements to help the most innovative and experienced U.S. AI experts learn how DoD accomplishes its mission and discover how their talents and expertise can contribute to solving DoD's and the nation's problems.
