6 February 2021

Artificial divide: How Europe and America could clash over AI

Ulrike Esther Franke

Artificial intelligence is a rapidly advancing field that policymakers everywhere are struggling to keep up with.

Calls for international, and particularly transatlantic, cooperation are growing.

In Europe, interest in strengthening “ethical” AI policy is particularly strong – including as a way of making Europe more attractive than other jurisdictions around the world.

Close cooperation between Europe and the US is not a given: Europe sees the US as its main competitor in AI; the US wants to join forces against China on AI, but European interest in such a front is weak.

The non-combat military realm may be a good area for transatlantic AI cooperation.

INTRODUCTION

A glance at the history of artificial intelligence (AI) shows that the field periodically goes through phases of development racing ahead and slowing down – often dubbed “AI springs” and “AI winters”. The world is currently several years into an AI spring, dominated by important advances in machine-learning technologies. In Europe, policymakers’ efforts to grapple with the rapid pace of technological development have gone through several phases over the last five to ten years. The first phase was marked by uncertainty among policymakers over what to make of the rapid and seemingly groundbreaking developments in AI. This phase lasted until around 2018 – though, in some European states, and on some issues, uncertainty remains. The second phase consisted of efforts to frame AI challenges politically, and to address them, on a domestic level: between 2018 and 2020, no fewer than 21 EU member states published national AI strategies designed to delineate their views and aims and, in some cases, to outline investment plans.

The next phase could be a period of international, and specifically transatlantic, cooperation on AI. After several years of European states working at full capacity to understand how to support domestic AI research, including by assembling expert teams to deliberate new laws and regulations, there is growing interest among policymakers and experts in looking beyond Europe. At the EU level, AI policy and governance have already received significant attention, with the European Commission playing an important role in incentivising member states to develop AI strategies and in starting to tackle questions about how to make sure AI is “ethical” and “trustworthy”. But recent months have seen a rise in the number of calls for international cooperation on AI driven by liberal democracies across the world. Western countries and their allies have set up new forums for cooperation on how to take AI forward, and are activating existing forums. More such organisations and platforms for cooperation are planned.

Calls for cooperation between the United States and Europe have become particularly regular and resonant: following last year’s US presidential election, it was reported that the European Commission planned to propose a “Transatlantic Trade and Technology Council”, which would set joint standards on new technologies. And, in September 2020, the US set up a group of like-minded countries “to provide values-based global leadership in defense for policies and approaches in adopting AI”, which included seven European states, in addition to countries such as Australia, Canada, and South Korea. In June 2020, the Global Partnership on Artificial Intelligence was founded to consider the responsible development of AI; it counts among its members the US, four European states, and the European Union.

This paper examines the reasons European states may want to work with the US on AI, and why the US may want to reach out to Europe on the issue. It also identifies the points of disagreement that may stop the allies from fully fleshing out transatlantic AI cooperation. The paper shows that, while both sides are interested in working together, their rationales for doing so differ. Furthermore, economic and political factors may stand in the way of cooperation, even though such cooperation could have a positive impact on the way AI develops. The paper also argues that transatlantic cooperation in the area of military AI could be a good first step – here, Europe and the US should build on existing collaboration within NATO. The paper concludes with a brief discussion of the different forums that have been created or proposed for transatlantic and broader Western cooperation on AI.

WHY WORK TOGETHER? DISAGREEMENTS AND SHARED GOALS

Experts initially thought of AI as a ‘dual-use technology’, meaning that it can be used in both civilian and military contexts. As AI has advanced, with new uses for it emerging all the time, it has now become more common to speak of AI as an “enabler” or a “general-purpose technology” – as electricity is, for example. AI can improve or enable various capabilities in almost all realms imaginable, from medicine and healthcare to basic research; from logistics and transport to journalism. With this change in understanding has come a realisation on both sides of the Atlantic that AI is likely to have immeasurable consequences for economic development, and will have an impact on social and democratic life, labour markets, industrial development, and more. This also means that policymakers and analysts are increasingly asking questions about how AI could affect the global balance of power.

There are two main rationales for current efforts at transatlantic cooperation on AI. Firstly, among experts and policymakers, there are concerns that AI may be developed and used in ways that are contrary to liberal democratic values and ethics. Secondly, some policymakers fear that AI may give their geopolitical competitors a significant advantage. While the former is the primary reason why many European states want to work with other democratic countries, the latter has played an important role in motivating the US to seek cooperation with Europe and other allies. This was the case even under Donald Trump, who was not known for his appreciation of alliances in general, or of Europeans in particular.

The ethics of AI

As AI-enabled systems become ever more widely used, their negative side-effects have become more evident. Concerns have, therefore, emerged that AI itself, or the way it is used, may be unethical. There are three main problem areas: the ethics of AI itself; the context in which AI is used; and the potential for AI to be misused.

Problematic AI

Machine-learning systems are those that use computing power to execute algorithms that learn from data. This means that AI is only as good as the algorithm it uses and the data it is trained on. If, for example, the data is incomplete or biased, the AI trained on it will be equally biased. AI researchers around the world, and especially researchers from minority groups, have raised the alarm about this particular risk, which has already materialised in several cases. In the US, a risk assessment tool used in Florida’s criminal justice system labelled African-American defendants as “high risk” at nearly twice the rate of white defendants. A hiring algorithm used at Amazon penalised applicants from women’s colleges, while a chatbot trained on Twitter interactions started to post racist tweets. The concern is that real-life data fed into machine-learning systems perpetuate existing human biases, and that – as humans tend to consider computers to be rational – these biases will effectively be sanctioned, thereby entrenching prejudice further in society. Furthermore, AI trained on datasets collected in one cultural context and deployed in another might effectively enable cultural imperialism. In response to these concerns, big tech firms have developed principles and guidelines, and created research groups and divisions, on ethical AI. More recently, however, scandals have emerged over big tech employees reportedly being forced out of their jobs for being too critical, heightening concerns that these companies are not taking the issue seriously enough.
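To make the mechanism concrete, the following is a minimal, illustrative sketch in Python (using the open-source scikit-learn library; the data and the “hiring” scenario are entirely invented). A model trained on the records of a biased historical process reproduces that bias, even though the two groups are equally qualified:

```python
# Illustrative only: a model trained on biased historical decisions
# reproduces the bias, even when the groups are equally skilled.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)      # 0 = majority group, 1 = minority group
skill = rng.normal(0.0, 1.0, n)    # same skill distribution for both groups

# Invented historical labels: group 1 was held to a higher bar.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.3, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# Equally skilled test candidates from each group get different outcomes.
skill_test = rng.normal(0.0, 1.0, 5_000)
for g in (0, 1):
    X_test = np.column_stack([np.full_like(skill_test, g), skill_test])
    print(f"group {g}: predicted hire rate = {model.predict(X_test).mean():.2f}")
# Prints roughly 0.50 for group 0 and 0.21 for group 1: the model has
# faithfully learned the historical penalty against group 1.
```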

Related to concerns about bias are those about the transparency of how AI works. Employing machine-learning methods means that systems are no longer explicitly programmed – told step by step what to do by human beings – but instead learn how to behave, either by themselves or under human supervision. It is difficult for a human to understand and track how an AI-enabled system has reached a conclusion. This makes it hard to challenge AI-enabled decisions, and to tell whether malicious actors have exploited the vulnerabilities of AI systems.
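The following toy sketch (illustrative only, using scikit-learn and its built-in digits dataset) shows what this opacity looks like in practice. A small trained network produces an answer, but the only “explanation” it can offer is several thousand learned numeric parameters, not a human-readable rule:

```python
# Illustrative only: a trained network answers, but its "reasoning" is
# a mass of numeric weights rather than an inspectable rule.
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0).fit(X, y)

print("prediction for the first image:", model.predict(X[:1])[0])

# The model's only account of itself: roughly 4,800 learned numbers.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("learned parameters:", n_params)
```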

Problematic context

Even if AI-enabled systems were proven to be perfectly reliable and unbiased, there are contexts in which delegating decisions to machines may be inherently problematic. This includes using AI-enabled systems to make decisions that have fundamental implications for an individual’s life, such as in a judicial or military context. In the military context, lethal autonomous weapon systems able to exert force without meaningful human control or supervision are particularly controversial. The concern is a moral one: should a machine – no matter how intelligent – be allowed to make decisions about the physical wellbeing, or indeed life and death, of a human being? The European Parliament answered this question in the negative, passing a resolution in 2018 that urged the EU and its member states “to work towards the start of international negotiations on a legally binding instrument prohibiting” such weapons.

What is artificial intelligence?

Despite its widespread use, the term AI remains controversial and ill-defined. Broadly speaking, AI refers to efforts to build computers and machines that can perform actions one would expect to require human intelligence, such as reasoning and decision-making. Currently, the most important advances in AI are being made through machine-learning techniques, such as “deep learning” and neural networks, which use computing power to execute algorithms that learn from data. Today’s AI is “narrow” or “weak”, meaning that it is able to learn and do just one task (often at a level above human capabilities). Research is currently taking place into how to build “artificial general intelligence” or “strong AI”, which would have the capacity to understand or learn any intellectual task that a human being can.
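As a deliberately tiny illustration of what “learning from data” means (plain Python, invented example): rather than being programmed with the rule y = 2x + 1, the program below estimates the rule from examples alone:

```python
# Illustrative only: the rule y = 2x + 1 is never written into the program;
# it is estimated from examples by gradient descent.
data = [(x, 2 * x + 1) for x in range(10)]   # examples of the unknown rule

w, b = 0.0, 0.0                              # parameters to be learned
lr = 0.01                                    # learning rate
for _ in range(2000):
    for x, y in data:
        err = (w * x + b) - y                # how wrong the current guess is
        w -= lr * err * x                    # nudge parameters to reduce error
        b -= lr * err

print(f"learned rule: y = {w:.2f}x + {b:.2f}")   # approximately y = 2.00x + 1.00
```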

Misuse of AI

Finally, there may be cases in which an AI-enabled system may be ethically unobjectionable and trustworthy, and the context of its use generally acceptable, but the specific goals for which the system is used are problematic. For example, AI-enabled surveillance might not be problematic per se; and it should be possible to design the underlying technology in a way that does not discriminate. But the specific use of an AI-enabled surveillance system to systematically oppress and exclude members of a minority group would be unethical. Equally, it is ethically problematic to take advantage of AI-enabled capabilities to analyse human behaviour, moods, and beliefs with the aim of influencing people’s behaviour and thoughts – during an election, for example. This is true even though the technology itself may have beneficial uses in other settings. In such contexts, many experts have raised concerns over increasingly powerful “deep fake” technologies – AI-enabled ways to create what look like genuine videos of people.

A growing number of AI developers are realising that their work could potentially be misused. In early 2019, California-based research lab OpenAI made the news when it announced it had developed a text-generating model able to write whole essays by itself – but that it would not share the dataset it used for training the algorithm or all of the code it runs on. This was unusual, as most AI research – even that by commercial actors – tends to be carried out openly. The organisation argued that it was worried about the misuse of the tool for disinformation. That said, OpenAI later released the full model, stating that it had seen “no strong evidence of misuse”.
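The model in question, GPT-2, was eventually released in full and is now openly available. As an illustrative sketch (assuming the open-source Hugging Face transformers library and its publicly hosted gpt2 checkpoint), generating machine-written text of this kind takes only a few lines:

```python
# Illustrative only: text generation with the openly released GPT-2 model,
# via the Hugging Face `transformers` library (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Transatlantic cooperation on artificial intelligence",
                   max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])  # a fluent, machine-written continuation
```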

Throughout the Western world and beyond, concerns have been raised about these dangers. Various firms and organisations, such as Google, have published charters and principles for ethical AI. The ethics of AI has become an area of intense academic research, with new institutions founded to study the topic, and calls for AI ethics to be recognised as an academic field comparable to medical ethics.

No actor, however, has so publicly put itself at the forefront of this issue as the EU has. The EU defined the ethical implications of AI as a primary area of interest and work comparatively early on. The European Commission created a “High-Level Expert Group on AI”, which in April 2019 released its Ethics Guidelines for Trustworthy Artificial Intelligence, followed by its Policy and Investment Recommendations for Trustworthy Artificial Intelligence. Ethical AI is not just a concern for EU institutions: every national AI strategy published by member states touches on the topic, and several countries, such as Denmark and Lithuania, identify ethical rules as their first priority.

Ethical AI has also become a subject of debate in the US, although in a less comprehensive way. For example, in 2020 the US Department of Defense adopted a series of ethical principles for the use of AI. Interestingly, in its latest report, the US National Security Commission on Artificial Intelligence speaks of ethics primarily in the context of “marshal[ling] Global AI Cooperation”.

Like-minded democratic states share an interest in guaranteeing that AI is developed and used in accordance with liberal democratic values. Both the US and Europe say they want to ensure that everyone can benefit equally from AI, and that competition does not create incentives that lead to an undercutting of standards. However, despite these shared interests, transatlantic cooperation on ethical AI may not be as easy as it first appears.

A Europe-US front on AI against China

International competition on technology, such as 5G, has recently attracted significant attention. At the 2020 Munich Security Conference, for example, tech was an important topic – yet the discussion was not really about tech, but about power, as the rivalry over who builds 5G telecommunications infrastructure turned into a US-Chinese competition. This was despite the fact that the leading 5G providers are European and Chinese.

There is a growing realisation that the adoption of AI-enabled systems may have geopolitical consequences and eventually affect the global balance of power. In particular, AI may give one actor considerable power over others, be it in the form of an economic boost or an AI-enabled military advantage, or through control over crucial technology components and standards.

In the US, there is growing concern over the possibility that China might become too strong an AI actor. The competition over global leadership between the US and China is intensifying, with technology in general, and AI in particular, among the main battlegrounds. The US fears that AI may give China a competitive edge. Therefore, countering China’s AI ambitions – as embodied in its attempts to dominate international technology standards bodies, for example – has become an important motive for the US to seek international cooperation. In this context, Joe Biden has proposed an “alliance of liberal democracies” to present an economic and political alternative to China.

European policymakers have been less vocal about the geopolitical consequences of AI. So far, the debate in Europe has primarily revolved around AI’s economic and social effects. Of the 21 strategies on AI either published or drafted by EU member states, very few touch on the geopolitical implications of AI. The notable exception to this is France, whose national AI strategy was clearly drafted with a geopolitical mindset. It warns that France and Europe need to “avoid becoming just ‘digital colonies’ of the Chinese and American giants”. The strategy’s inclusion of “American giants” is telling and important. It shows that, from a European point of view, the US is the primary ‘other’ that Europe measures itself against on technology – at least for now. This is despite the fact that, in recent years, Chinese acquisitions of European high-tech firms have caused significant concern.

Obstacles to cooperation

Both sides of the Atlantic are already motivated to cooperate with each other on AI. But, despite these shared interests, transatlantic cooperation on AI may not be straightforward. Four trends, in particular, could pose problems: transatlantic estrangement; European digital autonomy efforts; differing views on China; and, potentially, Brexit.

Transatlantic estrangement

The transatlantic alliance has had a bad four years. The Trump administration’s criticism of the United Nations and the World Trade Organization, the president’s threats to leave NATO, and his active criticism of the EU all made Europeans wonder whether they had lost their most important partner. Moreover, in light of the conflict over 5G, in the minds of many Europeans, technology in particular has become an area that creates conflict in the transatlantic relationship rather than fostering cooperation.

Although transatlantic relations are likely to improve under Biden, substantial damage has been done, and it will take some time to mend these ties. But, even if relations improve, it is becoming increasingly obvious that the US has a diminishing interest in Europe as a geopolitically important part of the world. This trend was already visible under Trump’s predecessor, Barack Obama. It is, therefore, unsurprising that, on technology cooperation, both sides emphasise the importance of working with other actors as well as each other. The US National Security Commission on AI, for example, recommends that the US Departments of State and Defense “should negotiate formal AI cooperation agreements with Australia, India, Japan, New Zealand, South Korea, and Vietnam”. Its March 2020 report emphasises on several occasions the importance of the Five Eyes intelligence alliance. Meanwhile, Europeans are pursuing the idea of an alliance for multilateralism. And, on technology and AI more specifically, they have also begun to reach out to other democratic allies.

European digital autonomy

The most important aspect of transatlantic estrangement, however, is not the loss of trust between the US and Europe – which the two sides will eventually rebuild. Rather, it is that, during the four years of the Trump administration, and partly in response to isolationist tendencies in the US, Europeans became much more comfortable talking about European strategic autonomy or sovereignty. While these efforts are not directed against the US, nor primarily an answer to Trump, they aim to empower Europe as an actor in its own right. In the technological realm, this has led to the idea of European digital sovereignty, the aim of which is to build up European technological capabilities. Although European digital sovereignty is not specifically targeted at the US, it has led, among other things, to moves towards regulating American technology companies, and to concerns over American firms acquiring European start-ups. European campaigners and some policymakers believe US tech giants such as Google, Apple, Facebook, and Amazon are forces to protect against. European thinking on technology thus partly developed in opposition to the US and US companies – and European efforts to build up digital sovereignty may impede transatlantic cooperation.

The EU’s effort to strengthen ethical AI, and to make ‘trustworthy AI’ a unique selling point for Europe, might also end up creating problems for transatlantic cooperation. Many EU policymakers believe that the EU’s insistence on ethical AI will eventually become a location advantage for Europe (much like data privacy): as more people become concerned about unethical AI and data security, they will prefer to use or buy AI ‘made in Europe’ rather than elsewhere. In this respect, two European aims are at odds with each other. On the one hand, Europeans want to ensure that AI is developed and used in an ethical way; partnering with a powerful player such as the US should be an obvious way to help achieve this goal. On the other hand, if the EU considers ethical AI not just a goal for humanity but also a development that may create commercial advantages for Europe, then transatlantic cooperation on this issue is counterproductive, as it would undermine Europe’s uniqueness.

Finally, many Europeans have expressed scepticism about the extent to which Europe and the US are indeed aligned on ethical AI principles. For example, the Danish national AI strategy argues for a common ethical and human-centred basis for AI. It describes ethical AI as a particularly European approach: “Europe and Denmark should not copy the US or China. Both countries are investing heavily in artificial intelligence, but with little regard for responsibility, ethical principles and privacy.” Many Europeans feel that the US “has no idea how to regulate” cyberspace and continues to show little enthusiasm for doing so. The EU, however, likes to think of itself as a trailblazer when it comes to digital rights, such as the 2014 “right to be forgotten” or the 2018 General Data Protection Regulation.

Differing views on China

As noted, only a few European states look at AI through a geopolitical lens, and EU efforts on this matter focus primarily on strengthening the EU as a global player. This means that the American interest in using transatlantic cooperation as a means to curb Chinese power is likely to have only limited traction in Europe. And US companies, rather than Chinese ones, currently remain the primary ‘other’ for Europe to measure itself against. European regulation efforts still concentrate on US companies rather than Chinese firms. In light of recent changes in language on China in both NATO and the EU, which describe the country as a “strategic competitor” and “systemic rival”, European and American views of China may converge eventually. But, at the moment, Europeans do not feel the same urgency as the US when it comes to pushing back against China. Unfortunately for those in the US who favour greater transatlantic cooperation, the European nation that most often thinks in geopolitical terms, France, is among those most sceptical of the US.

Brexit

Finally, the United Kingdom’s exit from the EU may further complicate transatlantic cooperation on AI. Even if the EU and the UK were to decide to work as closely as possible, the EU would no longer be able to speak for as much of Europe as it did previously. Any transatlantic cooperation on AI will, therefore, require coordination between three, rather than two, actors. Given the UK’s strong technology and AI credentials (AI leader DeepMind is based in London, although it is now owned by Google’s parent company, Alphabet), the country is likely to want to play an important role in any future negotiations on AI standards and use.

FORUMS FOR COOPERATION

AI can enable applications in fields as diverse as health, robotics, defence, and agriculture. Where should the focus of potential transatlantic cooperation lie?

If Europe and the US agree to focus on AI ethics, then they should seek to develop common rules and guidelines that both sides can enforce in their jurisdictions. However, if they agree that their shared goal is to slow down other actors’ – particularly China’s – AI advances, they will need to engage in more targeted forms of cooperation. US researchers have proposed several specific initiatives for international cooperation, such as coordinating investment screening procedures, and establishing common export controls on supply chain components, to ensure China remains dependent on imports of AI chips. This would be in addition to the long line of measures already introduced by the US Department of Commerce. These include requirements for companies to obtain licences before selling semiconductors to Chinese firm Huawei, a measure that aims to exert economic pressure and disrupt Chinese technology supply chains.

As noted, agreeing on shared goals and supporting measures will present some challenges. Beyond the specific themes of ethical AI and slowing Chinese progress in AI, however, there are other areas for transatlantic AI cooperation. Investing in these potentially less controversial areas may help create new platforms and lay important groundwork for greater cooperation. For example, the transatlantic allies should facilitate the exchange of knowledge and best practices on AI, and invest in mutually beneficial research, such as privacy-preserving machine learning.

Defence might also be a promising area for transatlantic cooperation, given the close military ties between the US and Europe through NATO. Military experts are raising concerns over how the introduction of AI onto the battlefield may hinder interoperability between allied forces, so defence could be a good realm in which to strengthen cooperation.

Militaries on both sides of the Atlantic are already investing in AI-enabled capabilities. In military affairs, as in the civilian realm, AI has a variety of uses. Military AI applications include autonomous vehicles and weapons; intelligence, surveillance, and reconnaissance; logistics (for example, the predictive maintenance of military systems such as vehicles and weapons); forecasting; and training (such as that in virtual reality simulations).

Some of these military capabilities – namely, lethal autonomous weapon systems, or “killer robots” – are among the most controversial uses of AI. The US and its European allies have adopted different positions on this issue in international debates such as those at the United Nations in Geneva, where lethal autonomous weapons have been under discussion since 2014. Transatlantic cooperation on lethal autonomous weapons, or other combat-related capabilities, does not, therefore, look promising. However, military AI includes many non-controversial uses, such as ‘sustainment’, which encompasses logistics as well as support activities such as financial management, personnel services, and health care. AI can make these services more efficient and cost-effective: predictive maintenance, for example, monitors a system such as an aircraft, using sensor inputs and data analysis to predict when parts will need to be replaced. Equally, AI can improve the efficiency of logistics by, for instance, ensuring that supplies are delivered in the right quantities and at the right time. Transatlantic cooperation in this field is uncontroversial but extremely useful – especially when carried out within NATO, as it could help bring allies closer together, establish joint procedures, and thereby ensure interoperability.
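To illustrate the idea, the following is a hedged sketch of a predictive-maintenance model (the sensor channels and data are invented; real systems would learn from fleet telemetry): a regression model is trained to estimate a part’s remaining useful life from sensor readings.

```python
# Illustrative only: predicting a part's remaining useful life (RUL)
# from synthetic sensor readings. All data and channels are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5_000
vibration = rng.normal(1.0, 0.2, n)       # synthetic sensor channels
temperature = rng.normal(80.0, 5.0, n)
flight_hours = rng.uniform(0, 3_000, n)

# Invented ground truth: wear grows with hours, vibration, and heat.
rul = 4_000 - flight_hours - 300 * vibration - 5 * temperature \
      + rng.normal(0, 50, n)
rul = np.maximum(rul, 0.0)                # a part cannot have negative life

X = np.column_stack([vibration, temperature, flight_hours])
X_train, X_test, y_train, y_test = train_test_split(X, rul, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"predicted hours until replacement: {model.predict(X_test[:1])[0]:.0f}")
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
```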
