
17 December 2019

Public opinion lessons for AI regulation

Baobao Zhang

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

An overwhelming majority of the American public believes that artificial intelligence (AI) should be carefully managed. Nevertheless, as the three case studies in this brief show, the public does not agree on how AI applications should be regulated. Indeed, population-level support for an AI application may mask opposition among certain subpopulations. Many AI applications, such as facial recognition technology, could cause disparate harm to already vulnerable groups, particularly ethnic minorities and low-income individuals. In addition, partisan divisions are likely to prevent government regulation of AI applications that could be used to influence electoral politics. In particular, the regulation of content recommendation algorithms used by social media platforms has been highly contested. Finally, mobilizing an influential group of political actors, such as machine learning researchers in the campaign against lethal autonomous weapons, may be more effective in shifting policy debates than mobilizing the public at large.


PUBLIC OPINION ON AI

AI is a general-purpose technology that enables machines to perform tasks that previously required human intelligence. As a result, it has a wide range of commercial and security applications. A 2018 survey conducted by the Center for the Governance of AI found that 84% of the American public believes that AI is a technology that should be carefully managed. Furthermore, the survey suggests that Americans consider most AI governance challenges to have high issue importance, as seen in Figure 1 below.

This brief focuses on how public opinion will likely shape the regulation of three applications of AI in the U.S.: facial recognition technology used by law enforcement, algorithms used by social media platforms, and lethal autonomous weapons. These case studies were selected because they involve AI governance issues that the American public characterizes as either highly likely to affect them in the next decade or important for tech companies and governments to manage. Political debates around these applications touch on central themes articulated in numerous AI ethics principles, including fairness, privacy, and safety. As shown in the figure below, Americans consider some of these governance challenges more likely than others to affect them in the next decade. The issues thought to be the most likely to affect Americans and rated the highest in issue importance include preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake and harmful content online, preventing AI cyberattacks, and protecting data privacy.

LAW ENFORCEMENT’S USE OF FACIAL RECOGNITION TECHNOLOGY

Facial recognition algorithms use facial features to identify, verify, and classify persons. The technology has widespread consumer applications, such as categorizing photos and unlocking smartphones. Law enforcement agencies have also begun to use facial recognition technology to scan through driver’s license photos and mug shots, and U.S. Customs and Border Protection uses it to screen international passengers at major airports.

Civil rights groups and academic researchers have criticized law enforcement’s adoption of facial recognition technology, citing concerns about racial and gender bias as well as violations of civil rights. Researchers found that leading commercial facial recognition programs are far less accurate at identifying women, particularly those with darker skin, than at identifying white men. Even if facial recognition algorithms were to become more accurate, critics contend that the technology harms the American public by expanding state surveillance capabilities, which could be used to target marginalized groups.

Criticism of law enforcement’s use of facial recognition has already resulted in governmental and industry restrictions. San Francisco, Oakland, and Berkeley, California, and Somerville, Massachusetts, have banned use of the technology by police. California passed a three-year moratorium on the use of facial recognition technology on police body-worn cameras. Massachusetts lawmakers are debating a similar moratorium on government use of facial recognition and related biometric technology until regulations can be established. Axon, a major tech supplier to law enforcement in the U.S., has halted the deployment of facial recognition on its body-worn cameras on the recommendation of its AI ethics board.

Nationwide, a majority of adults support law enforcement’s use of facial recognition, but support is lower among certain groups, and regulation is possible in cities and states where opposition is strong. For instance, 56% of American adults trust law enforcement to use facial recognition technology responsibly, according to a recent survey by the Pew Research Center. Nevertheless, support is lower among those aged 18 to 29, Black Americans, and those who identify as Democrats or lean Democrat, as seen in Figure 2. Importantly, these demographic subgroups also have lower trust in law enforcement in general.

“Nationwide, a majority of adults support law enforcement’s use of facial recognition, but … support is lower among those aged 18 to 29, Black Americans, and those who identify as Democrats.”

Although there is little sub-national survey data on support for facial recognition technology, there are likely geographic differences in policy preferences. For instance, 79% of Massachusetts voters support a moratorium on government use of facial recognition surveillance, according to a poll sponsored by the American Civil Liberties Union (ACLU) of Massachusetts. As criticism of the technology becomes more politically salient, public opinion could shift further: opposition to the use of facial recognition software increased by 16 percentage points between 2018 and 2019, according to data from Morning Consult.

Several bills have been introduced in Congress to regulate facial recognition and other biometric technologies, but none has seen any movement. Relatively high national-level support for law enforcement’s use of facial recognition technology will likely make federal regulation difficult. Nevertheless, as legislation passed or proposed in California and Massachusetts shows, regulation is possible in localities where public opposition is more widespread. Civil society groups, like the ACLU, will continue to play a significant role in pushing for moratoriums or bans by governments and industry.

Figure 2: Trust of law enforcement agencies using facial recognition technology responsibly
Question: How much, if at all, do you trust [law enforcement agencies] to use facial recognition technology responsibly?
[Chart: percentage of each demographic group (all adults; ages 18-29, 30-49, 50-64, 65+; White, Black, Hispanic; Dem/Lean Dem, Rep/Lean Rep) responding “A great deal” or “Somewhat,” on a 0% to 70% scale.]

ALGORITHMIC ACCOUNTABILITY BY SOCIAL MEDIA PLATFORMS

Algorithms used to recommend content on social media have increasingly come under scrutiny. Platforms like Facebook, Twitter, and YouTube use machine learning to suggest media content and advertisements that optimize for user engagement. Civil society groups and researchers have expressed concerns that these algorithms help spread misinformation, proliferate digital propaganda, create partisan echo chambers, and promote violent extremism.

Unlike facial recognition technology, which can more readily be regulated at the local or state level, social media platforms have users across the U.S. and around the world. California has sought to regulate online bots and protect consumer privacy, setting the terms of the debate for federal regulation. However, it remains to be seen whether the federal government will follow California’s lead or oppose the state’s policies.

In the U.S., legislation to regulate social media platforms has stalled because of the divergent policy priorities of the two parties. In light of Russian interference in the 2016 U.S. presidential election and the Cambridge Analytica scandal, Congress has held several hearings to investigate and castigate tech companies. The techlash from the left and the right differs: Democrats prioritize the prevention of digital manipulation and consumer privacy, while Republicans focus on alleged bias against conservatives. Democratic senators have introduced legislation that would increase transparency for online campaign advertisements and require tech companies to safeguard users’ sensitive data. Republican lawmakers have accused social media platforms of censoring conservative viewpoints, despite evidence to the contrary, and the Trump White House is reportedly drafting an executive order to combat this alleged bias.

“The techlash from the left and the right differs: Democrats prioritize the prevention of digital manipulation and consumer privacy, while Republicans focus on alleged bias against conservatives.”

The American public is concerned about the lack of accountability by tech companies that operate social media platforms, but does not agree on policy solutions. According to a report from the Pew Research Center, 51% of the U.S. public thinks that tech companies should be regulated more than they are now. At the same time, Americans indicate that they have greater trust in tech companies than the U.S. federal government to manage the development and use of AI in the best interests of the public, per data from the Center for the Governance of AI. The public is evenly split on whether there should be regulation of content recommendation algorithms based on political affiliations or political viewpoints, according to a 2017 Harvard/Harris Poll survey.

The partisan division among lawmakers in Congress is also reflected in public opinion. There exists a stark asymmetry in how Republicans and Democrats perceive bias in social media platforms and tech companies. While 54% of Republicans think it is very likely that social media platforms censor political viewpoints, only 20% of Democrats do, according to a survey from the Pew Research Center. Conversely, while 53% of Democrats think major tech companies support the views of liberals and conservatives equally, only 28% of Republicans feel the same way.

With political gridlock hindering governmental regulation of algorithms used by social media platforms, tech companies have tried to answer their critics through industry self-regulation. Nevertheless, industry self-regulation is not immune to underlying conflicts in American politics. For instance, Google’s AI ethics board dissolved after the company’s employees and outside civil society groups protested the inclusion of Heritage Foundation president Kay Coles James and drone company CEO Dyan Gibbens on the board.

LETHAL AUTONOMOUS WEAPONS

Lethal autonomous weapons systems can identify and engage targets without human intervention; the technology is currently under development and has not yet been deployed on battlefields. A coalition of nongovernmental organizations, the Campaign to Stop Killer Robots, has advocated for an international ban on fully autonomous weapons. Critics of the technology argue that lethal autonomous weapons are unethical and unsafe. Some suggest that an arms race to develop the technology would exacerbate tensions between major powers; at the same time, non-state actors, including terrorists, could adopt the technology for malicious uses. Twenty-six countries have publicly expressed support for a pre-emptive international ban on fully autonomous lethal weapons; the U.S., along with several other major powers (e.g., Russia, the U.K., and Israel), currently opposes such a ban.

Figure 3: Attitudes toward lethal autonomous weapons by country
Question: How do you feel about the use of such lethal autonomous weapons systems in war?
[Chart: percent of respondents in each country (Israel, Brazil, Japan, United States, Great Britain, Russia, France, China, Germany, South Korea) who “Strongly/Somewhat support” or “Somewhat/Strongly oppose,” on a 0% to 80% scale.]

While only a slim majority of Americans oppose the use of fully autonomous weapons systems in war, according to a 2019 Ipsos/Human Rights Watch survey, opposition has grown by 7 percentage points since 2018. Opposition among the U.S. public is slightly lower than among the publics of other UN Security Council countries, as seen in Figure 3. Research shows that political messaging can affect American attitudes toward lethal autonomous weapons. For instance, support for autonomous weapons increases when the public is told that other states or non-state actors are developing them. On the other hand, explaining the security risks of a military AI arms race can reduce support for investment in military applications of AI.

The Campaign to Stop Killer Robots has adopted some of the publicity strategies that the International Campaign to Ban Landmines used to successfully lobby for an international ban on landmines. But researchers point out that persuading the public to oppose lethal autonomous weapons could be more challenging than convincing them to oppose landmines: the former have yet to cause any casualties, while the latter have led to gruesome deaths and injuries.

Although persuading the public can be difficult, critics of lethal autonomous weapons have already convinced other important political actors, namely machine learning researchers and the heads of some tech companies, to support their message. This group may have a unique advantage in shaping a ban, or at least regulation, of lethal autonomous weapons, because its members are building the key components of the technology. Many leading machine learning researchers, as well as SpaceX and Tesla founder Elon Musk and Google DeepMind co-founders Demis Hassabis, Shane Legg, and Mustafa Suleyman, have signed a pledge calling for an international ban on lethal autonomous weapons. An outcry by Google engineers led the company to decide not to renew its contract with the Pentagon’s Project Maven, which uses AI to interpret video images and could be used to improve the precision of drone strikes.

CONCLUSION

AI is a fast-moving technology that continually produces new governance challenges, and the policy debates described in these case studies have already evolved in recent months. More than 400 police agencies have begun partnerships with Ring, an Amazon-owned doorbell-camera firm, that could expand law enforcement’s surveillance capabilities. Experts and the public have expressed concerns that AI-generated “deepfake” videos could worsen misinformation and increase harmful content on social media. International regulation of lethal autonomous weapons has stalled while the U.S. military aims to increase investment in AI. Advocacy groups for ethical uses of AI are likely disappointed by the slow pace of federal regulation. Nevertheless, educating and mobilizing the public or other relevant political actors could bring about legislation at the local or state level, or self-regulation by tech companies.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon, Facebook and Google provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
