
22 July 2018

An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future


Ali Crawford has an M.A. from the Patterson School of Diplomacy and International Commerce, where she focused on diplomacy, intelligence, cyber policy, and cyber warfare. She tweets at @ali_craw. Divergent Options’ content does not contain information of an official nature, nor does the content represent the official position of any government, any organization, or any group.

Title: An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Summary: While the U.S. Department of Defense (DoD) continues to experiment with Artificial Intelligence (AI) as part of its Third Offset Strategy, questions regarding levels of human participation, ethics, and legality remain. Though a future battlefield will likely treat autonomous decision-making technology as the norm, the transition from modern applications of artificial intelligence to those potential applications will center on incorporating human-machine teaming into existing frameworks.


Text: In an essay titled Centaur Warfighting: The False Choice of Humans vs. Automation, author Paul Scharre concludes that the best warfighting systems will combine human and machine intelligence to create hybrid cognitive architectures that leverage the advantages of each[1]. There are three potential partnerships. The first pegs humans as essential operators, meaning the AI cannot operate without its human counterpart. The second tasks humans as moral agents who make the value-based decisions that prevent or promote the use of AI in combat situations. The third, in which humans act as fail-safes, gives more operational authority to the AI system; the human operator intervenes only if the system malfunctions or fails. Artificial intelligence, and autonomous weapons systems in particular, is a controversial technology with the capacity to greatly improve human efficiency while reducing potential human burdens. But before the Department of Defense embraces intelligent weapons systems or programs with full autonomy, more human-machine partnerships that test the viability, legality, and ethical implications of artificial intelligence will likely occur.
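Where decision authority sits in each of these three partnerships can be made concrete with a minimal sketch. The Python below is purely illustrative and is not drawn from Scharre’s essay or any fielded system; every name and condition in it is a hypothetical simplification.

```python
from enum import Enum

class Partnership(Enum):
    ESSENTIAL_OPERATOR = 1  # AI cannot act without a human command
    MORAL_AGENT = 2         # human approves or vetoes value-laden actions
    FAIL_SAFE = 3           # AI acts alone; human steps in only on malfunction

def may_act(mode: Partnership, human_command: bool,
            human_veto: bool, malfunction: bool) -> bool:
    """Hypothetical authorization gate showing how much authority
    shifts to the machine as the teaming mode changes."""
    if mode is Partnership.ESSENTIAL_OPERATOR:
        return human_command       # nothing happens without the operator
    if mode is Partnership.MORAL_AGENT:
        return not human_veto      # the machine proposes, the human disposes
    return not malfunction         # FAIL_SAFE: machine proceeds unless pulled back
```

Read top to bottom, the gate loosens: the first mode demands an affirmative human act, the second only the absence of an objection, and the third only the absence of a failure.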

To better understand why artificial intelligence is controversial, it is necessary to distinguish between the arguments for and against using AI with operational autonomy. In 2015, prominent scientists and technologists, including Stephen Hawking and Elon Musk, signed an open letter that highlights the potential benefits of AI while cautioning that those benefits cannot be separated from near-term questions of ethics and the applicability of law[2]. A system with an intelligent, decision-making brain carries significant consequences. What if the system targets civilians? How does international law apply to a machine? Will an intelligent machine respond to commands? These are the questions with which military and ethical theorists grapple.

For a more practical thought problem, consider the Moral Machine project from the Massachusetts Institute of Technology[3]. You, the judge, are presented with dilemmas involving an intelligent, self-driving car. The car suffers brake failure and must decide what to do next. If the car continues straight, it will strike and kill some number of men, women, children, elderly people, or animals. If the car swerves, it will crash into a barrier, causing the immediate deaths of its passengers, who are likewise some mix of men, women, children, and the elderly. Although you are the judge in Moral Machine, the simulation is indicative of the ethical and moral dilemmas that may arise when employing artificial intelligence in, say, combat. In these scenarios, the ethical theorist takes issue with the machine having the decision-making capacity to place value on human life, and to potentially make irreversible and damaging decisions.
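The theorist’s objection can be made concrete with a toy sketch: for the car to decide at all, someone must have encoded explicit weights on human and animal life in advance. The weights and names below are entirely hypothetical and exist only for illustration; the objection is that any such numbers must exist, not what their particular values are.

```python
# Hypothetical harm weights -- the ethical problem is their existence, not their values.
VALUE_WEIGHTS = {"child": 1.0, "adult": 1.0, "elderly": 1.0, "animal": 0.1}

def expected_harm(casualties: dict) -> float:
    """Weighted harm of one maneuver, e.g. {"adult": 2, "child": 1}."""
    return sum(VALUE_WEIGHTS[kind] * count for kind, count in casualties.items())

def choose_maneuver(straight: dict, swerve: dict) -> str:
    """Pick the lower-harm maneuver; ties default to inaction (going straight)."""
    return "swerve" if expected_harm(swerve) < expected_harm(straight) else "straight"

# The weights commit the machine to a choice long before any emergency occurs:
print(choose_maneuver(straight={"child": 1}, swerve={"adult": 2}))  # -> "straight"
```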

Assuming autonomous weapons systems do have a place in the future of military operations, what would precede them? Realistically, human-machine teaming would be introduced before a fully autonomous machine. What exactly is human-machine teaming, and why is it important when discussing the future of artificial intelligence? To gain and maintain superiority in operational domains, both past and present, the United States has ensured that its conventional deterrents are powerful enough to dissuade great powers from going to war with the United States[4]. Thus, an offset strategy focuses on gaining advantages over enemy powers and capabilities. Historically, the First Offset occurred in the early 1950s with the introduction of tactical nuclear weapons. The Second Offset manifested a little later, in the 1970s, with the implementation of precision-guided weapons after the Soviet Union gained nuclear parity with the United States[5]. The Third Offset, a relatively modern strategy, generally focuses on maintaining technological superiority among the world’s great powers.

Human-machine teaming is part of the Department of Defense’s Third Offset Strategy, as are deep learning systems and cyber weaponry[6]. Machine learning systems relieve humans of a breadth of burdensome tasks, or augment operations to decrease potential risks to the lives of human fighters. For example, in 2017 the DoD began work on an intelligent system called “Project Maven,” which uses deep learning technology to identify objects of interest in drone surveillance footage[7]. Terabytes of footage are collected each day from surveillance drones. Human analysts spend significant amounts of time sifting through this data to identify objects of interest before they can even begin their analytical processes[8]. Project Maven’s deep-learning algorithm lets human analysts spend more time practicing their craft to produce intelligence products and less time processing information. Despite Google’s recent departure from the program, Project Maven will continue to operate[9]. Former Deputy Defense Secretary Bob Work established the Algorithmic Warfare Cross-Functional Team in early 2017 to work on Project Maven. In the announcement, Work described artificial intelligence as necessary for strategic deterrence, noting “the [DoD] must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors[10].”
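Conceptually, the triage step such a system performs for analysts resembles the following sketch, which assumes an off-the-shelf pretrained detector from torchvision. It is a generic illustration of deep-learning object detection over video frames, not Project Maven’s actual pipeline, and the function and threshold names are hypothetical.

```python
# Generic sketch of detection-based triage; NOT Project Maven's actual code.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained general-purpose detector (COCO classes), standing in for a
# mission-specific model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def flag_frames(frame_paths, score_threshold=0.8):
    """Return only the frames with high-confidence detections, so analysts
    review a short list instead of raw terabytes of footage."""
    flagged = []
    with torch.no_grad():
        for path in frame_paths:
            image = to_tensor(Image.open(path).convert("RGB"))
            detections = model([image])[0]  # dict of boxes, labels, scores
            if (detections["scores"] > score_threshold).any():
                flagged.append(path)
    return flagged
```

The human stays in the loop: the model only ranks and filters, and the analyst still makes every judgment that matters.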

This article collectively refers to human-machine teaming as any process in which humans interact in some capacity with artificial intelligence. However, human-machine teaming spans multiple technological fields and is not merely a prerequisite of autonomous weaponry[11]. Human-robot teaming may begin to appear in the immediate future, given developments in robotics. Boston Dynamics, a premier engineering and robotics company, is well-known for its videos of human- and animal-like robots completing everyday tasks. Imagine a machine like BigDog working alongside human soldiers or rescue workers, or navigating otherwise inaccessible terrain[12]. These robots are not fully autonomous, yet the unique partnership between human and robot offers a new set of opportunities and challenges[13].

Before fully autonomous systems or weapons have a place in combat, human-machine teams will need to prove successful and sustainable. These teams have the potential to improve human performance, reduce risks to human counterparts, and expand national power – all goals of the Third Offset Strategy. However, there are challenges to procuring and incorporating artificial intelligence. The DoD will need to seek out deeper relationships with technological and engineering firms, not just defense contractors.

Using humans as moral agents and fail-safes allows the problems of ethical and legal applicability to be tested while opening the debate on the future use of autonomous systems. Autonomous weapons will likely not see combat until these challenges, coupled with ethical and lawful considerations, are thoroughly tested and regulated.

Endnotes:

[1] Paul Scharre, Temp. Int’l & Comp. L.J., “Centaur Warfighting: The False Choice of Humans vs. Automation,” 2016, https://sites.temple.edu/ticlj/files/2017/02/30.1.Scharre-TICLJ.pdf

[2] Daniel Dewey, Stuart Russell, Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence,” 2015, https://futureoflife.org/data/documents/research_priorities.pdf?x20046

[3] Moral Machine, http://moralmachine.mit.edu/

[4] Cheryl Pellerin, Department of Defense, Defense Media Activity, “Work: Human-Machine Teaming Represents Defense Technology Future,” 8 November 2015, https://www.defense.gov/News/Article/Article/628154/work-human-machine-teaming-represents-defense-technology-future/

[5] Ibid.

[6] Katie Lange, DoDLive, “3rd Offset Strategy 101: What It Is, What the Tech Focuses Are,” 30 March 2016, http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/; and Mackenzie Eaglen, RealClearDefense, “What is the Third Offset Strategy?,” 15 February 2016, https://www.realcleardefense.com/articles/2016/02/16/what_is_the_third_offset_strategy_109034.html

[7] Cheryl Pellerin, Department of Defense News, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[8] Tajha Chappellet-Lanier, FedScoop, “Pentagon’s Project Maven responds to criticism: ‘There will be those who will partner with us,’” 1 May 2018, https://www.fedscoop.com/project-maven-artificial-intelligence-google/

[9] Tom Simonite, Wired, “Pentagon Will Expand AI Project Prompting Protests at Google,” 29 May 2018, https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/

[10] Cheryl Pellerin, Department of Defense News, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[11] Maj. Gen. Mick Ryan, Defense One, “How to Plan for the Coming Era of Human-Machine Teaming,” 25 April 2018, https://www.defenseone.com/ideas/2018/04/how-plan-coming-era-human-machine-teaming/147718/

[12] Boston Dynamics, “BigDog Overview,” March 2010, https://www.youtube.com/watch?v=cNZPRsrwumQ

[13] Richard Priday, Wired, “What’s really going on in those Boston Dynamics robot videos?,” 18 February 2018, http://www.wired.co.uk/article/boston-dynamics-robotics-roboticist-how-to-watch
