
29 January 2020

The case for AI transparency requirements

Alex Engler

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

ANONYMOUS AI IS COMING

We are nearing a new age of ubiquitous AI. Between our smartphones, computers, cars, homes, the internet of things, and social media, the average citizen of a developed country might interact with some sort of AI system dozens of times every day. Many of these applications will be helpful, too. Everyone will welcome better automated customer service, where you can get accurate and thorough answers to questions about complex topics. For example, soon, it might be easy for AI to convey what your health insurance actually covers, based on a database of similar prior claims.


For most of these interactions, it will be obvious, and intentionally so, that the text you read, voice you hear, or face you see is not a real person. Other times, however, it will not be so clear. As AI technologies quickly and methodically climb out of the uncanny valley, customer service calls, website chatbots, and interactions on social media and in virtual reality may become progressively harder to recognize as artificial.

TURING TESTS WILL FAIL THE TEST OF TIME

We have now passed the year in which the dystopian 1982 sci-fi movie “Blade Runner” is set, 2019. In a pertinent scene, the protagonist, played by Harrison Ford, performs an elaborate Turing test on an apparently human character, Rachael. Over several hours and more than one hundred questions, he finally determines that Rachael is an AI—albeit an impressively lifelike one. While the perfectly humanoid form of Rachael far outstrips modern technology, there is still a prophetic aspect to this interaction. The problem that we face is not that AI is so convincing that a committed person cannot identify it as synthetic. Instead, the problem is that the time required to do so is prohibitive.

“The problem that we face is not that AI is so convincing that a committed person cannot identify it as synthetic. Instead, the problem is that the time required to do so is prohibitive.”

Currently, a reasonably motivated person may not have that much trouble uncovering an AI imposter in conversation—though only half of Americans think they could identify a bot on social media.[1] Even the best natural language processing (NLP) models will struggle with repetition, display irregularities in coherence across sentences, and have problems with idioms and grammar. However, these AI models are getting better very quickly. A series of cutting-edge models released by research labs like the Allen Institute for AI, Google AI, and OpenAI are pushing the state of the field. They can write graphic stories[2] and assist in formulaic news reporting like that found in sports and finance coverage.[3] They are also starting to learn to debate competently.[4] The Duke Reporters’ Lab is even hopeful that AI can fact-check in real time[5]—a critical step toward acting believably human. While many of their flaws will persist, AI systems do not have to be perfect to be convincing. They just need to be good enough to take advantage of a busy and distracted person.

In some circumstances, these models can be so convincing that they have sparked debate about how, and even whether, they should be released. Unlike in much of academia, releasing a model means more than just making a methodology and research paper public. These NLP models are trained on huge numbers of documents (think all of Wikipedia and then some) using tremendous computing power. They are then released with that entire learned interpretation of a language stored in their parameters. This means that when a new NLP model is made public, any semi-competent developer can adapt it directly for their specific purpose without the original dataset or expensive computing hardware.
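
To make concrete what "releasing a model" means in practice, here is a minimal sketch that reuses a publicly released language model; it assumes the open-source Hugging Face transformers library and the "gpt2" model name, neither of which is prescribed by this report.

    # Minimal sketch: reusing a publicly released language model.
    # Assumes the Hugging Face "transformers" library is installed; the
    # library and model choice are illustrative, not from this report.
    from transformers import pipeline

    # Downloads the pretrained weights -- the learned "interpretation of a
    # language" produced by the original, expensive training run.
    generator = pipeline("text-generation", model="gpt2")

    # Generate text without the original dataset or training hardware.
    print(generator("Your insurance claim was", max_length=40, num_return_sequences=1))

A developer could then fine-tune or prompt such a model for a narrow purpose, which is precisely why both the benefits and the risks of a release spread so quickly downstream.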

This is a boon for the applications that benefit society (e.g., automated closed-captioning and language translation), as those efforts stand to gain from the latest technology. It also explains why the errors and biases of these models matter so much, as they will propagate through downstream applications.[6] Of course, these models can also be used for ill. Researchers at Middlebury College demonstrated that OpenAI’s newest model, GPT-2, can generate propaganda at scale for extremist views like white supremacy and jihadism.[7]

THE COST OF ANONYMOUS AI IN COMMERCE

Unfortunately, corporations are going to find many reasons to duplicitously present their software as real people. They will decide that their products, advertisements, or sales efforts will garner more time and attention if they are perceived as coming from a person. Take, for example, Google’s Duplex, an AI-powered phone assistant that can conduct conversations to carry out tasks like making a restaurant reservation. In launching Duplex, Google opted for a voice as close to a real human’s as possible—one that clearly eschewed disclosing itself as software—and only reversed course under public pressure. Many less-prominent companies will not be subject to the same criticism, or they may choose to ignore it.

A recent randomized experiment showed that when chatbots did not disclose their automated nature, they outperformed inexperienced salespersons.[8] But when the chatbot revealed itself, its sales performance dropped by 80%. Regardless of whether future research corroborates this finding, even a small amount of evidence that anonymity helps sales may convince many companies not to disclose their AI.

“[E]ven a small amount of evidence that anonymity helps sales may convince many companies not to disclose their AI.”

Companies also use targeted sponsorships of social media personalities, especially apparent on Instagram, to sell their brands. There is already a small cottage industry creating celebrity chatbots that are intended to mirror the writing style of a famous personality.[9] As these improve, would it be surprising to see automated interactions between internet stars and their fans with the intention of driving sales of pseudoscientific health supplements and expensive athleisure brands? It is probably happening already.

One might enjoy ordering Domino’s Pizza from its chatbot, Dom,[10] or getting help choosing a seasonal flannel from American Eagle’s chatbot,[11] but what happens when it’s not clear if financial advice is coming from a robot? It is naïve to think these systems are exclusively designed to inform customers—they are intended to change behavior, especially toward spending more money. As the technology improves and the datasets expand, they will only get more effective to this end. Customers should, at the very least, have the right to know that this influence is coming from an automated system.

The same concerns arise in other sectors too, such as with telemedicine software.[12] Recent research from Brookings’s Metropolitan Policy Program makes it clear that AI is going to expand into many aspects of our society and economy. The consequences of anonymous AI might be quite extensive.

THE COST OF ANONYMOUS AI IN POLITICS

In 2017, the public comment period for the Federal Communications Commission’s (FCC) proposed reversal of net neutrality received more than 24 million comments. This smashed the previous record of 4 million comments and crashed the FCC’s website. While the level of public interest in this rule was quite high, millions of these comments turned out to be automatically generated with fake or stolen personal information—sometimes using the names of the deceased. BuzzFeed News later discovered that the industry group Broadband for America was behind the fraudulent campaigns, creating millions of comments against net neutrality.[13] Independent studies, however, suggest that the overwhelming majority of real comments were in favor of preserving net neutrality.[14] BuzzFeed News also identified similar astroturfing campaigns for pro-choice education policies in Texas and in favor of the Keystone Pipeline, but it is hard to know exactly how pervasive this problem is.

“Although the claim that social media bots are responsible for swinging major elections is likely overblown, they are having some impact.”

The scale of AI political engagement on social media is better understood. A recent paper estimated that 12.6% of politically engaged Twitter accounts are automated bots—more or less unchanged since 2016.[15] During the 2016 presidential election, another study estimated that 20% of all political tweets came from bots.[16] Although the claim that social media bots are responsible for swinging major elections is likely overblown, they are having some impact. A working paper studying Brexit and the 2016 U.S. election found clear evidence that bots contributed to the political echo chamber and amplified polarization.[17] Still more research showed that online criticism of embattled Venezuelan President Nicolas Maduro increased following the deactivation of thousands of pro-Maduro Twitter bots.[18] Bots have also been observed furthering dangerous pseudoscience campaigns, like those against the MMR vaccine.[19]

Currently, Twitter allows bots on the platform if they do not otherwise violate rules like the platform manipulation and spam policy.[20] Twitter deserves some credit for its actions against coordinated disinformation campaigns, such as those originating in Russia, Iran, Venezuela,[21] and Saudi Arabia,[22] as well as for making a significant amount of its data available to researchers.[23] That availability of data is why the scope of the problem on Twitter is relatively well understood, compared to other digital platforms like YouTube, Facebook, Instagram, and WhatsApp.

Still, there is some academic evidence that Twitter could be faster in removing automated accounts that violate its rules.[24] Further, it currently makes no effort to label bots that do not violate Twitter’s rules—and none of the digital platforms do. This is despite an exchange in a 2018 Senate Intelligence Committee hearing in which both Facebook and Twitter suggested that they were going to work toward bot disclosure.[25]

The social media companies tend to argue that these problems are difficult from a technical perspective. When confronted with research about possible remedies, they often say that the research was performed in academic settings and does not reflect their larger-scale applied circumstances. This is likely partially true, though it has not been publicly demonstrated with evidence. A difference in incentives also drives this difference of opinion—even a small number of false positives, in which a real user has an account affected by bot detection, is inimical to the platforms. Noting this, the researchers argue for a process in which human reviewers evaluate accounts flagged as suspicious by automated analyses.[26]
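
Purely as an illustration of that flag-then-review process, the hypothetical sketch below routes only accounts that an automated classifier scores as highly suspicious to a human review queue rather than suspending them automatically; the threshold, scoring function, and queue are invented for this example and reflect no platform’s actual system.

    # Hypothetical sketch: automated flagging with humans as the final step.
    SUSPICION_THRESHOLD = 0.9  # set high to keep false positives rare

    def triage_accounts(accounts, bot_score, human_review_queue):
        """Queue suspicious accounts for human review; never auto-suspend."""
        for account in accounts:
            score = bot_score(account)  # output of an automated bot classifier
            if score >= SUSPICION_THRESHOLD:
                # A human reviewer makes the final call, protecting real
                # users from being mislabeled by the automated analysis alone.
                human_review_queue.append((account, score))

The design choice here is that automation narrows the search while a person absorbs the cost of the final judgment, which is exactly the trade-off the platforms say makes false positives so worrisome.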

Given the consequences and their reluctance to act, requiring that major digital platforms make a sustained and reasonable effort to enforce bot disclosure is not asking too much. A legal requirement also puts some of the onus directly on companies and political campaigns themselves, making the platforms less unilaterally responsible and possibly giving them more data to help hone their bot detection efforts for actors less likely to play by the rules (see Russia).

THE BENEFITS OF TRANSPARENT AI

AI disclosure turns out to be low-hanging fruit in the typically complex bramble of technology policy. Unfortunately, it will not prevent abuse of these models in campaigns of terrorist propaganda or foreign disinformation. It can, however, reduce exposure to the deceptive political and commercial applications discussed in this paper, and there are other benefits as well.

“[I]ndividual decision-making will be improved when people know they are interacting with AI systems. They can make judgments about the advantages and limitations of the system, then choose whether to work with it or seek human help.”

For instance, individual decision-making will be improved when people know they are interacting with AI systems. They can make judgments about the advantages and limitations of the system, then choose whether to work with it or seek human help. Right now, many people might not know how to interact with modern AI systems to better reach their goals. However, this can be learned through repeated exposure.

It may take many interactions for people to comprehend what an AI system can and cannot do. The type of AI in operation today is called “narrow AI,” which means it may perform remarkably well at some tasks while being utterly infantile at others. It is a far cry from the “general AI” that populates our culture—think of TV shows and movies like “Westworld,” “Battlestar Galactica,” “Ex Machina,” and, of course, “Blade Runner.” People may dramatically overestimate an AI’s capacity when they hear it answer their every technical question, not realizing that it cannot understand the unique context of their situation, such as when a refund is warranted because of a company’s error.

From a less regulatory perspective, AI transparency may help humans direct their sincerity primarily toward people, not robots. When a host at a restaurant is faced with a human customer standing in front of them and an AI assistant calling on the phone, there will be no ambiguity—the real person can get preferential treatment. It is a tough enough world, and we should preserve compassion for the humans who can appreciate it.

BUILDING ON THE “BLADE RUNNER” BILL

California’s state legislature has already passed the Bolstering Online Transparency (BOT) Act, nicknamed the “Blade Runner” bill. This legislation requires bots operating on the internet to disclose themselves if they incentivize a commercial transaction or influence an election.[27] Notably, the legislation specifically exempts the social media companies from any responsibility. While a bill that only prevents domestic companies and campaigns from using surreptitious AI systems has sufficient merit, much of the most nefarious AI activity is targeted at the digital platforms. This is why Sen. Dianne Feinstein, D-Calif., has introduced federal legislation that would require social media companies to enforce bot disclosure requirements (in addition to banning political entities from using AI to imitate people).[28]

The U.S. Congress should pass legislation that combines these proposals, putting a burden of AI disclosure on companies and political actors while also mandating a reasonable effort by the digital platforms to label bots. It should include fines to be levied by the Federal Trade Commission for commercial violations and by the Federal Election Commission for political violations; that these agencies are imperfect choices for this task can be added to the litany of reasons for considering a new technology regulatory agency. Although commercial products that come with AI systems, like Apple’s Siri and Amazon’s Alexa, are not a concern, the law’s jurisdiction should not be strictly limited to the internet. There are many circumstances in which we may begin to encounter AI, and not all of them will be strictly on the web.

“[S]ince we are only now in the first decade of modern AI, we may later find ourselves especially thankful for setting a standard of transparency for reasons not apparent today.”

There’s immediate value to a broad requirement of AI-disclosure: reducing fraud, preserving human compassion, educating the public, and improving political discourse. And since we are only now in the first decade of modern AI, we may later find ourselves especially thankful for setting a standard of transparency for reasons not apparent today.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Amazon, Apple, Facebook, and Google provide general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.
