11 October 2018

Our Trust Deficit With Artificial Intelligence Has Only Just Started – Analysis

By Eleonore Pauwels*

“We suffer from a bad case of trust-deficit disorder,” said UN Secretary-General António Guterres in his recent General Assembly speech. His diagnosis is right, and his focus on new technological developments underscores their crucial role in shaping the future global political order. Indeed, artificial intelligence (AI) is poised to deepen the trust deficit across the world. The Secretary-General, echoing his recently released Strategy on New Technologies, repeatedly referenced rapidly developing fields of technology in his speech, rightly calling for greater cooperation between countries and among stakeholders, as well as for more diversity in the technology sector. His trust-deficit diagnosis reflects the urgent need to build a new social license and to develop incentives that ensure technological innovation, particularly AI, is deployed safely and aligned with the public interest.

However, AI-driven technologies do not fit easily into today’s models of international cooperation, and will in fact tend to undermine, rather than reinforce, global governance mechanisms. Three trends in AI illustrate the enormous set of interrelated challenges the UN faces.
AI and Reality

First, AI is a potentially dominating technology whose powerful implications, both positive and negative, will be increasingly difficult to isolate and contain. Engineers design learning algorithms with a specific set of predictive and optimizing functions that can be used either to empower or to control populations. Without sophisticated fail-safe protocols, the potential for misuse or weaponization of AI is pervasive and can be difficult to anticipate.

Take deepfakes as an example. Sophisticated AI programs can now manipulate sounds, images and videos, creating impersonations that are often impossible to distinguish from the original. Deep-learning algorithms can, with surprising accuracy, read human lips, synthesize speech, and to some extent simulate facial expressions. Once released outside the lab, such simulations could easily be misused, with wide-ranging impacts (indeed, this is already happening at a low level). On the eve of an election, deepfake videos could falsely portray public officials as involved in money-laundering or human rights abuses; public panic could be sown by videos warning of non-existent epidemics or cyberattacks; forged incidents could potentially lead to international escalation.

The capacity of a range of actors to influence public opinion with misleading simulations could have powerful long-term implications for the UN’s role in peace and security. By eroding the sense of trust and truth between citizens and the state—and indeed amongst states—truly fake news could be deeply corrosive to our global governance system.
AI Reading Us

Second, AI is already connecting and converging with a range of other technologies—including biotech—with significant implications for global security. AI systems around the world are trained to predict various aspects of our daily lives by making sense of massive data sets, such as cities’ traffic patterns, financial markets, consumer behaviour trends, health records and even our genomes.

These AI technologies are increasingly able to harness our behavioural and biological data in innovative and often manipulative ways, with implications for all of us. For example, the My Friend Cayla smart doll sends voice and emotion data of the children who play with it to the cloud, which led to a US Federal Trade Commission complaint and a ban on the doll in Germany. In the US, emotional analysis is already being used in the courtroom to detect remorse in deposition videos. It could soon become part of job interviews, used to assess candidates’ responses and their fitness for the role.

The ability of AI to intrude upon—and potentially control—private human behaviour has direct implications for the UN’s human rights agenda. New forms of social and bio-control could in fact require a reimagining of the framework currently in place to monitor and implement the Universal Declaration of Human Rights, and will certainly require the multilateral system to better anticipate and understand this quickly emerging field.
AI as a Conflict Theatre

Finally, the ability of AI-driven technologies to influence large populations is of such immediate and overriding value that AI is almost certain to become a theatre for future conflicts. There is a very real prospect of a “cyber race” in which powerful nations and large technology platforms enter into open competition for our collective data as the fuel to generate economic, medical and security supremacy across the globe. Forms of “cyber-colonization” are increasingly likely, as powerful states become able to harness AI and biotech together to understand and potentially control other countries’ populations and ecosystems.
Towards Global Governance of AI

Politically, legally and ethically, our societies are not prepared for the deployment of AI. The UN, established many decades before the emergence of these technologies, is in many ways poorly placed to develop the kind of responsible governance that will channel AI’s potential away from these risks and towards our collective safety and wellbeing. In fact, the resurgence of nationalist agendas across the world may point to a dwindling capacity of the multilateral system to play a meaningful role in the global governance of AI. Major corporations and powerful member states may see little value in bringing multilateral approaches to bear on what they consider lucrative and proprietary technologies.

There are, however, some important ways in which the UN can help build the kind of collaborative, transparent networks that may begin to treat our “trust-deficit disorder.” The Secretary-General’s recently launched High-Level Panel on Digital Cooperation is already working to build a collaborative partnership with the private sector and to establish a common approach to new technologies. Such an initiative could eventually find ways to reward cooperation over competition, and to put in place common commitments to using AI-driven technologies for the public good.

Perhaps the most important challenge for the UN in this context is one of relevance, of re-establishing a sense of trust in the multilateral system. But if the above trends tell us anything, it is that AI-driven technologies are an issue for every individual and every state, and that without collective, collaborative forms of governance, there is a real risk that they will become a force that undermines global stability.

About the author:
*Eleonore Pauwels is the Research Fellow on Emerging Cybertechnologies at the Centre for Policy Research at United Nations University, focusing on Artificial Intelligence.

Source:

This article was published by Modern Diplomacy
