15 December 2020

CO20200 | AI Governance and Military Affairs – Artificial Intelligence and Arms Control: What it Means for Singapore


Andrea Gilli, Mauro Gilli

RSIS Commentary is a platform to provide timely and, where appropriate, policy-relevant commentary and analysis of topical and contemporary issues. The authors’ views are their own and do not represent the official position of the S. Rajaratnam School of International Studies, NTU. These commentaries may be reproduced with prior permission from RSIS and due recognition to the author(s) and RSIS. Please email to Mr Yang Razali Kassim, Editor RSIS Commentary at RSISPublications@ntu.edu.sg.

SYNOPSIS

Arms control can be pursued and applied to the realm of emerging technologies, but it must be narrow in focus to ensure effectiveness and avoid unintended consequences, such as hampering relations with private industry. What are the implications for Singapore?

MASSIVE AND rapid progress in so-called emerging and disruptive technologies such as artificial intelligence is generating global concern: as countries invest in R&D, their armed forces aim to integrate such technologies into their force structures.

As individual services experiment with the new technologies and develop new operational concepts and warfighting doctrines, new questions on the future of arms control emerge. Can arms control play a role in the governance of artificial intelligence? If so, how?

Killer Robots, Arms Control and Artificial Intelligence

The issue of arms control and artificial intelligence, especially in Europe, is often linked to the decade-old campaign to ban lethal autonomous weapons (LAWs), or so-called killer robots. While progress on this front has been limited, and a preventive ban of the kind the campaign seeks appears unlikely at this stage, arms control can still make a significant contribution.

However, arms control must be embedded in a strategic logic: as a means, it must serve specific goals. The challenge is thus squaring the circle: getting countries with different goals to agree on the means.

For instance, do countries aim to achieve strategic stability, i.e., to reduce the risk of unintended escalation? Or do some of them want to preserve their competitive advantage, e.g., by withholding some critical technologies from adversaries?

Self-evidently, these two options are at odds, and reconciling them is not easy: it was possible during the Cold War, but it is not a given that it will happen again. Indeed, the multilateral arms control architecture is weakening. Alternatively, countries may pursue non-proliferation to create a club and regulate access to some critical technologies, somewhat like the Non-Proliferation Treaty (NPT).

The Three AI Pillars & Arms Control

To understand how arms control can contribute to artificial intelligence, it is imperative to understand the relationship between its three elements: algorithms, data and hardware.

Algorithms

There are fundamentally two types of algorithms. Generic algorithms, developed by the commercial world, serve multiple purposes and can be reprogrammed.

Realistically, trying to regulate their diffusion would be both ineffective and inefficient, as it would negatively affect the private-sector workforce, industry's incentives to invest, and government-industry relations.

Specific algorithms, by contrast, are designed for targeted missions, in the military domain but also in private industry, for instance in advanced manufacturing processes such as semiconductor fabrication. Regulating and controlling their diffusion is easier and more effective. On the one hand, in some countries their sale is already under government supervision (for example, through US export control lists such as the Commerce Control List administered by the Commerce Department).

On the other hand, since these algorithms are much more challenging to develop, few actors control the technology; they thus represent critical chokepoints in the supply chain, which countries can exploit strategically for arms control.

Data

If data is the new oil, it is useless without drilling rigs (sensors), refineries (data scientists), and pipelines (data infrastructures). Unsurprisingly, there is not a single case of people, or countries, becoming rich thanks to raw data alone. From an arms control perspective, not all data is the same.

Fundamentally, there are two types of data: training datasets, on which algorithms are perfected, and operational data (sometimes called data analytics), generated during operations, which the algorithms exploit to improve performance. The latter is not really an arms control issue but relates more to privacy and security. Training datasets, in contrast, can be subjected to arms control agreements.
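To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the scikit-learn classifier, the synthetic data and the variable names are assumptions chosen for illustration, not a description of any real system. A model is first built on a curated training dataset and then incrementally updated with data generated during operations:

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)

    # 1) Training dataset: curated, labelled examples on which the algorithm
    #    is perfected before deployment (the kind of data that could, in
    #    principle, fall under a control regime).
    X_train = rng.normal(size=(1000, 8))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

    model = SGDClassifier(random_state=0)
    model.partial_fit(X_train, y_train, classes=np.array([0, 1]))

    # 2) Operational data: generated during use and fed back to refine the
    #    model (closer to a privacy and security question than to arms
    #    control, as noted above).
    X_ops = rng.normal(size=(200, 8))
    y_ops = (X_ops[:, 0] + X_ops[:, 1] > 0).astype(int)
    model.partial_fit(X_ops, y_ops)

    print("accuracy after operational update:", model.score(X_ops, y_ops))

Only the first kind of data sits naturally within an export control logic; the second accumulates continuously wherever the system operates.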

Like algorithms, general training datasets are difficult to monitor, and thus any technology control regime would be difficult to enforce. Highly specific training datasets are easier to monitor and, in some cases, are already subject to export restrictions.

The current artificial intelligence paradigm is built on large datasets, mostly because data is cheap. However, should an industrial inflection point occur, in which software accuracy rather than sheer data volume becomes the dominant standard, the role and importance of data would progressively diminish.

Hardware

The last pillar of artificial intelligence concerns computing power, which mostly translates into semiconductors. In this respect, three considerations are in order. Because of Moore's Law, first formulated in 1965, the power of microprocessors has increased continuously over the past decades, doubling roughly every one to two years.

Simultaneously, however, because of the so-called Second Moore's Law (also known as Rock's Law), the cost of semiconductor production plants has also increased constantly, doubling roughly every four years and thus leading to an extremely concentrated industry, one that has also adopted a global structure.
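The strategic weight of these two trends is, at bottom, a matter of compound growth. The back-of-the-envelope sketch below, in Python, is illustrative only; the doubling periods are the rough figures cited above, and actual historical rates have varied:

    # Compound growth implied by the two "laws" cited above (rough figures).
    def growth_factor(years: float, doubling_period_years: float) -> float:
        """Factor by which a quantity grows if it doubles every doubling_period_years years."""
        return 2.0 ** (years / doubling_period_years)

    horizon = 20  # years

    # Moore's Law: chip performance, doubling roughly every one to two years.
    perf_fast = growth_factor(horizon, 1.0)
    perf_slow = growth_factor(horizon, 2.0)

    # "Second Moore's Law" (Rock's Law): fab cost, doubling roughly every four years.
    cost = growth_factor(horizon, 4.0)

    print(f"Chip performance over {horizon} years: ~{perf_slow:,.0f}x to ~{perf_fast:,.0f}x")
    print(f"Fab construction cost over {horizon} years: ~{cost:.0f}x")

A thousand-fold to million-fold gain in performance against a thirty-two-fold rise in plant costs over two decades is what drives both the spread of AI applications and the extreme concentration of chipmaking.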

This trend has several implications for arms control. Few companies produce leading-edge semiconductor manufacturing equipment (SME), so technology control agreements can be put in place in this realm. Likewise, not all countries can produce the most advanced semiconductors. In this case, however, the advantages of export control regimes depend on the relative performance curves of industry leaders and followers.

Some believe tight control regimes will undermine the oligopolistic structure of the industry, while others are more sceptical. In the background lies the question of the end of Moore's Law. As artificial intelligence requires ever more parallelised chips to conduct multiple operations, it is likely that, for many applications, such chips will become too expensive, leading to a dramatic change in the industry whose consequences are difficult to anticipate.

What It Means for Singapore

The AI-centred technological revolution we are witnessing, by moving away from some traditional features of the industrial age such as economies of scale, is probably going to strengthen small, knowledge-intensive countries at the global level.

Singapore is a leading hub for technological innovation and adoption, and the country has an established reputation in these domains. Geopolitics, however, is not going to disappear overnight.

For a country like Singapore, much as for Switzerland in Europe, a strong multilateral system of arms control and technology export regimes would probably be of interest, as the country has much to gain from international stability and global trade.

As discussions grow in different corners of the world about what type of arms control regime to pursue, and which countries to include, Singapore has strong interests at stake as a producer of some key advanced technologies.

About the Authors

Andrea Gilli is Senior Researcher at the NATO Defence College and Affiliate at the Center for International Security and Cooperation. Mauro Gilli is Senior Researcher at the Center for Security Studies of ETH-Zurich. The views expressed are the authors’ only and do not reflect the official positions of NATO, the NATO Defence College or any other organisation they are or have been associated with. They contributed this commentary as part of a series in collaboration with the RSIS Military Transformations Programme.
