
11 September 2022

State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons

Kirsten Gronlund

As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.

These investments appear to mark the early stages of an AI arms race. Much like the nuclear arms race of the 20th century, this type of military escalation poses a threat to all humanity and is ultimately unwinnable. It incentivizes speed over safety and ethics in developing new technologies, and as these technologies proliferate, it offers no long-term advantage to any player.

Nevertheless, the development of military AI is accelerating. Below are the current military AI programs, policies, and positions of seven key players: the United States, China, Russia, the United Kingdom, France, Israel, and South Korea. All information is from State of AI: Artificial intelligence, the military, and increasingly autonomous weapons, a report by PAX.

“PAX calls on states to develop a legally binding instrument that ensures meaningful human control over weapons systems, as soon as possible,” says Daan Kayser, the report’s lead author. “Scientists and tech companies are also responsible for preventing these weapons from becoming reality. We all have a role to play in stopping the development of Killer Robots.”

The United States

UN Position

In April 2018, the US underlined the need to develop “a shared understanding of the risk and benefits of this technology before deciding on a specific policy response. We remain convinced that it is premature to embark on negotiating any particular legal or political instrument in 2019.”

AI in the Military

In 2014, the Department of Defense released its ‘Third Offset Strategy,’ the aim of which, as then-Deputy Secretary of Defense Robert Work described in 2016, “is to exploit all advances in artificial intelligence and autonomy and insert them into DoD’s battle networks (…).”

The 2016 report ‘Preparing for the Future of Artificial Intelligence’ also refers to the weaponization of AI and notably states: “Given advances in military technology and AI more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid.”

In September 2018, the Pentagon committed to spending USD 2 billion over the next five years through the Defense Advanced Research Projects Agency (DARPA) to develop the “next wave of AI technologies.”

The Advanced Targeting and Lethality Automated System (ATLAS) program, a US Army initiative, “will use artificial intelligence and machine learning to give ground-combat vehicles autonomous target capabilities.”

Cooperation with the Private Sector

Establishing collaboration with private companies can be challenging, as the widely publicized case of Google and Project Maven has shown: following protests from Google employees, Google stated that it would not renew its contract. Nevertheless, other tech companies, such as Clarifai, Amazon, and Microsoft, still collaborate with the Pentagon on this project.

The Project Maven controversy deepened the gap between the AI community and the Pentagon. The government has developed two new initiatives to help bridge this gap.

DARPA’s OFFSET program, which has the aim of “using swarms comprising upwards of 250 unmanned aircraft systems (UASs) and/or unmanned ground systems (UGSs) to accomplish diverse missions in complex urban environments,” is being developed in collaboration with a number of universities and start-ups.

DARPA’s Squad X Experimentation Program, which aims for human fighters to “have a greater sense of confidence in their autonomous partners, as well as a better understanding of how the autonomous systems would likely act on the battlefield,” is being developed in collaboration with Lockheed Martin Missiles and Fire Control.

China

UN Position

China demonstrated the “desire to negotiate and conclude” a new protocol “to ban the use of fully autonomous lethal weapons systems.” However, China does not want to ban the development of these weapons, which has raised questions about its exact position.

AI in the Military

There have been calls from within the Chinese government to avoid an AI arms race. The sentiment is echoed in the private sector, where the chairman of Alibaba has said that new technology, including machine learning and artificial intelligence, could lead to World War III.

Despite these concerns, China’s leadership is continuing to pursue the use of AI for military purposes.

Cooperation with the Private Sector

To advance military innovation, President Xi Jinping has called for China to follow “the road of military-civil fusion-style innovation,” such that military innovation is integrated into China’s national innovation system. This fusion has been elevated to the level of a national strategy.

The People’s Liberation Army (PLA) relies heavily on tech firms and innovative start-ups. The larger AI research organizations in China can be found within the private sector.

There are a growing number of collaborations between defense and academic institutions in China. For instance, Tsinghua University launched the Military-Civil Fusion National Defense Peak Technologies Laboratory to create “a platform for the pursuit of dual-use applications of emerging technologies, particularly artificial intelligence.”

Regarding the application of artificial intelligence to weapons, China is currently developing “next generation stealth drones,” including Ziyan’s Blowfish A2 model. According to the company, this model “autonomously performs more complex combat missions, including fixed-point timing detection, fixed-range reconnaissance, and targeted precision strikes.”

Russia

UN Position

Russia has stated that the debate around lethal autonomous weapons should not ignore their potential benefits, adding that “the concerns regarding LAWS can be addressed through faithful implementation of the existing international legal norms.” Russia has actively tried to limit the number of days allotted for such discussions at the UN.

AI in the Military

While Russia does not yet have a military-only AI strategy, it is clearly working towards integrating AI more comprehensively.

The Foundation for Advanced Research Projects (the Foundation), which can be seen as the Russian equivalent of DARPA, opened the National Center for the Development of Technology and Basic Elements of Robotics in 2015.

At a conference on AI in March 2018, Defense Minister Shoigu pushed for increasing cooperation between military and civilian scientists in developing AI technology, which he stated was crucial for countering “possible threats to Russia's technological and economic security.”

In January 2019, reports emerged that Russia was developing an autonomous drone, which “will be able to take off, accomplish its mission, and land without human interference,” though “weapons use will require human approval.”

Cooperation with the Private Sector

A new city named Era, devoted entirely to military innovation, is currently under construction. According to the Kremlin, the “main goal of the research and development planned for the technopolis is the creation of military artificial intelligence systems and supporting technologies.”

In 2017, Kalashnikov — Russia’s largest gun manufacturer — announced that it had developed a fully automated combat module based on neural-network technologies that enable it to identify targets and make decisions.

The United Kingdom

UN Position

The UK believes that an “autonomous system is capable of understanding higher level intent and direction.” It suggested that autonomy “confers significant advantages and has existed in weapons systems for decades” and that “evolving human/machine interfaces will allow us to carry out military functions with greater precision and efficiency,” though it added that “the application of lethal force must be directed by a human, and that a human will always be accountable for the decision.” The UK stated that “the current lack of consensus on key themes counts against any legal prohibition,” and that it “would not have any practical effect.”

AI in the Military

A 2018 Ministry of Defence report underlines that the MoD is pursuing modernization “in areas like artificial intelligence, machine-learning, man-machine teaming, and automation to deliver the disruptive effects we need in this regard.”

The MoD has various programs related to AI and autonomy, including the Autonomy program. Activities in this program include algorithm development, artificial intelligence, machine learning, “developing underpinning technologies to enable next-generation autonomous military-systems,” and the optimization of human-autonomy teaming.

The Defence Science and Technology Laboratory (Dstl), the MoD’s research arm, launched the AI Lab in 2018.

The best-known example of autonomous weapons technology currently under development in the UK is the top-secret Taranis armed drone, the “most technically advanced demonstration aircraft ever built in the UK,” according to the MoD.

Cooperation with the Private Sector

The MoD has a cross-government organization called the Defence and Security Accelerator (DASA), launched in December 2016. DASA “finds and funds exploitable innovation to support UK defence and security quickly and effectively, and support UK prosperity.”

In March 2019, DASA awarded a GBP 2.5 million contract to Blue Bear Systems, as part of the Many Drones Make Light Work project. On this, the director of Blue Bear Systems said, “The ability to deploy a swarm of low cost autonomous systems delivers a new paradigm for battlefield operations.”

France

UN Position

France understands the autonomy of LAWS as total, with no form of human supervision from the moment of activation and no subordination to a chain of command. France stated that a legally binding instrument on the issue would not be appropriate, describing it as neither realistic nor desirable. France proposed a political declaration that would reaffirm fundamental principles and “underline the need to maintain human control over the ultimate decision of the use of lethal force.”

AI in the Military

France’s national AI strategy is detailed in the 2018 Villani Report, which states that “the increasing use of AI in some sensitive areas such as in Defense (with the question of autonomous weapons) raises a real society-wide debate and implies an analysis of the issue of human responsibility.”

This has been echoed by the French Minister for the Armed Forces, Florence Parly, who said that “giving a machine the choice to fire or the decision over life and death is out of the question.”

On defense and security, the Villani Report states that the use of AI will be a necessity in the future to ensure security missions, to maintain power over potential opponents, and to maintain France’s position relative to its allies.

The Villani Report refers to DARPA as a model, though not one to be replicated outright; some of its methods “should inspire us nonetheless. In particular as regards the President’s wish to set up a European Agency for Disruptive Innovation, enabling funding of emerging technologies and sciences, including AI.”

The Villani Report emphasizes the creation of a “civil-military complex of technological innovation, focused on digital technology and more specifically on artificial intelligence.”

Cooperation with the Private Sector

In September 2018, the Defense Innovation Agency (DIA) was created as part of the Direction Générale de l’Armement (DGA), France’s arms procurement and technology agency. According to Parly, the new agency “will bring together all the actors of the ministry and all the programs that contribute to defense innovation.”

One of the most advanced projects currently underway is the nEUROn unmanned combat air system, developed by French arms producer Dassault on behalf of the DGA, which can fly autonomously for over three hours.

Patrice Caine, CEO of Thales, one of France’s largest arms producers, stated in January 2019 that Thales will never pursue “autonomous killing machines,” and is working on a charter of ethics related to AI.

Israel

UN Position

In 2018, Israel stated that the “development of rigid standards or imposing prohibitions to something that is so speculative at this early stage, would be imprudent and may yield an uninformed, misguided result.” Israel underlined that “we should also be aware of the military and humanitarian advantages.”

AI in the Military

It is expected that Israeli use of AI tools in the military will increase rapidly in the near future.

The main technical unit of the Israeli Defense Forces (IDF) and the engine behind most of its AI developments is called C4i. Within C4i is the Sigma branch, whose “purpose is to develop, research, and implement the latest in artificial intelligence and advanced software research in order to keep the IDF up to date.”

The Israeli military deploys weapons with a considerable degree of autonomy. One of the most relevant examples is the Harpy loitering munition, also known as a kamikaze drone: an unmanned aerial vehicle that can fly around for a significant length of time to engage ground targets with an explosive warhead.

Israel was one of the first countries to “reveal that it has deployed fully automated robots: self-driving military vehicles to patrol the border with the Palestinian-governed Gaza Strip.”

Cooperation with the Private Sector

Public-private partnerships are common in the development of Israel’s military technology. There is a “close connection between the Israeli military and the digital sector,” which is said to be one of the reasons for the country’s AI leadership.

Israel Aerospace Industries, one of Israel’s largest arms companies, has long been developing increasingly autonomous weapons, including the above-mentioned Harpy.

South Korea

UN Position

In 2015, South Korea stated that “the discussions on LAWS should not be carried out in a way that can hamper research and development of robotic technology for civilian use,” but that it is “wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns.” In 2018, it raised concerns about limiting civilian applications as well as the positive defense uses of autonomous weapons.

AI in the Military

In December 2018, the South Korean Army announced the launch of a research institute focusing on artificial intelligence, called the AI Research and Development Center. The aim is to capitalize on cutting-edge technologies for future combat operations and “turn it into the military’s next-generation combat control tower.”

South Korea is developing new military units, including the Dronebot Jeontudan (“Warrior”) unit, intending to develop and deploy unmanned platforms that incorporate advanced autonomy and other cutting-edge capabilities.

South Korea is known to have used the armed SGR-A1 sentry robot, which has operated in the demilitarized zone separating North and South Korea. The robot has both a supervised mode and an unsupervised mode. In the unsupervised mode, “the SGR-A1 identifies and tracks intruders, eventually firing at them without any further intervention by human operators.”

Cooperation with the Private Sector

Public-private cooperation is an integral part of the military strategy: the plan for the AI Research and Development Center is “to build a network of collaboration with local universities and research entities such as the KAIST and the Agency for Defense Development.”

In September 2018, South Korea’s Defense Acquisition Program Administration (DAPA) launched a new strategy to develop its national military-industrial base, with an emphasis on boosting ‘Industry 4.0 technologies’, such as artificial intelligence, big data analytics, and robotics.
