25 March 2019

The Promise and Perils of AI: Q&A with Douglas Yeung


Douglas Yeung is a social psychologist at RAND whose research has covered topics as diverse as social well-being, workforce diversity, and public opinion in Iran. He specializes in analyzing large volumes of social media posts for insights into human behavior. Before coming to RAND, he helped develop an app called Wertago (“the ultimate nightlife app for the ultimate night owl”) that was a grand prize winner in Google's first Android Developer Challenge.

You work on artificial intelligence. Let's start with a definition. When you say artificial intelligence, what do you mean?

It's really any kind of computing system that can augment our decisionmaking, or can even figure some things out on its own. The whole idea is to leverage the computing power that we have today to help make decisions, to help spot patterns and trends that we couldn't otherwise detect.

You've looked recently at the question of bias in AI. What's the concern there?


Any technology, or really anything that we build, reflects the values, the norms, and, of course, the biases of its creators. We know that the people who build AI systems today are predominantly male, white, and Asian, and a lot of the innovations come out of the United States. People have expressed concern that this could introduce bias. It's of particular concern because AI, by its very nature, operates at a scale no individual decisionmaker can, so any bias it carries has a correspondingly broad impact. We should be asking: What might be the unintended consequences of that bias?

What's an example of how that plays out?

One example is an algorithm that was used in criminal sentencing to estimate the likelihood that a prisoner would commit new crimes if released. A news investigation found that the algorithm flagged African-American prisoners as likely to reoffend at a much higher rate than it flagged other released prisoners.
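To make the mechanism concrete, here is a minimal sketch, using entirely synthetic data and a generic logistic-regression model rather than the actual sentencing tool, of how a risk score trained on historically biased labels reproduces that bias:

```python
# Minimal sketch (synthetic data, hypothetical model): a risk score
# trained on historically biased labels reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate risk factor and one protected group attribute (0/1).
prior_offenses = rng.poisson(2, n)
group = rng.integers(0, 2, n)

# Historical labels encode bias: at the same number of prior offenses,
# group 1 was recorded as "reoffended" more often.
logit = 0.5 * prior_offenses - 2.0 + 1.0 * group
reoffended = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([prior_offenses, group])
model = LogisticRegression().fit(X, reoffended)

# Two people with identical records, differing only in group membership,
# receive very different predicted risks.
same_record = np.array([[2, 0], [2, 1]])
print(model.predict_proba(same_record)[:, 1])
```

Note that simply dropping the group column would not eliminate the problem: any feature correlated with group membership can act as a proxy for it.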


You recently warned about the danger of intentional bias. Why would someone want to introduce bias into an AI?

For all the same reasons that they hack systems, that they troll, that they commit crimes for profit or engage in other illicit activities. You can imagine how this could become yet another front in the competitive battles that private companies fight against each other all the time. It has also been reported that attackers have sought to sow conflict in our society and democracy by inflaming racial tensions. It seems plausible that someone could intentionally introduce bias into an AI system to provoke a similar outrage.

How big of a concern is that?

Messing with data is not new. Our defense and intelligence communities have been warning about the possibility of someone injecting bad or false data into datasets for a while now. This would, in some ways, be just a different angle on that. But another reason it's a concern is that it could be really hard to detect and really hard to attribute to anyone. That makes it difficult to deter attackers or hold them accountable.
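As a hypothetical illustration of why such tampering is hard to detect, the sketch below (synthetic data, not any real attack) flips a small fraction of training labels near a model's decision boundary. Aggregate accuracy barely moves, so a routine check would miss the attack, yet scores for borderline cases quietly shift:

```python
# Minimal sketch (synthetic data): label-flipping data poisoning.
# Corrupting 2% of training labels barely dents overall accuracy,
# which is what makes this kind of attack hard to detect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
X = rng.normal(size=(n, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # clean ground truth

clean = LogisticRegression().fit(X, y)

# The attacker wants borderline positives scored as low risk, so they
# flip a few positive labels that sit near the decision boundary.
score = X[:, 0] + X[:, 1]
candidates = np.flatnonzero((score > 0) & (score < 0.5))
victims = rng.choice(candidates, size=int(0.02 * n), replace=False)
y_poisoned = y.copy()
y_poisoned[victims] = 0

poisoned = LogisticRegression().fit(X, y_poisoned)

# Aggregate accuracy looks almost unchanged...
print(clean.score(X, y), poisoned.score(X, y))
# ...but predicted probabilities for borderline cases have shifted down.
probe = np.array([[0.2, 0.2], [0.1, 0.1]])
print(clean.predict_proba(probe)[:, 1])
print(poisoned.predict_proba(probe)[:, 1])
```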

That's why people really need to be aware that bias is a potential problem whenever artificial intelligence is being used to make decisions. If we can make people more aware of the potential for bias, then at least there are more eyes on the problem. It makes it that much less likely that someone could slip something in undetected.

You're a social psychologist. How did you come to be looking at this?

I'm interested generally in the societal impact of technology. These technologies are designed, essentially, to make our lives better. I've been interested in how we use them, how they shape our behavior from then on, and what they reflect back about ourselves.
