9 May 2019

FACEBOOK IS FINDING PROBLEMS WITH ARTIFICIAL INTELLIGENCE TOO

TOM SIMONITE 

Facebook program manager Lade Obamehinti was helping test a prototype of the company’s Portal video chat device, which uses computer vision to identify and zoom in on a person speaking. But as Obamehinti, who is black, enthusiastically described her breakfast of French toast, the device ignored her and focused instead on a colleague—a white man.

The second day of F8, Facebook’s annual developer conference, headlined by chief technology officer Mike Schroepfer, was more sober. He, Obamehinti, and other technical leaders reflected on the challenges of using technology—particularly artificial intelligence—to safeguard or enhance the company’s products without creating new biases and problems. “There aren’t simple answers,” Schroepfer said.

Schroepfer and CEO Mark Zuckerberg have said that, at Facebook’s scale, AI is essential to remedy the unintended consequences of the company digitizing human relationships. But like any disruptive technology, AI creates unpredictable consequences of its own, Facebook’s director of AI, Joaquin Candela, said late Wednesday. “It’s just impossible to foresee,” he said.

Obamehinti’s tale of algorithmic discrimination showed how Facebook has had to invent new tools and processes to fend off problems created by AI. She said being ignored by the prototype Portal spurred her to develop a new “process for inclusive AI” that has been adopted by several product development groups at Facebook.

That process involved measuring racial and gender biases both in the data used to train the Portal’s vision system and in the system’s performance across those groups. She found that women and people with darker skin were underrepresented in the training data, and that the prerelease product was less accurate at seeing those groups.
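
Facebook hasn’t published that audit, but its two halves (counting who appears in the training data, and measuring detection accuracy per group) are straightforward to sketch. The Python below is a minimal illustration using assumed field names such as "skin_tone"; it is not Facebook’s code, and the sample records are invented.

# Hypothetical sketch of the kind of dataset and performance audit
# Obamehinti describes. Field names and group labels are assumptions.
from collections import Counter

def representation(samples, key):
    """Share of training samples belonging to each demographic group."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def accuracy_by_group(samples, detected, key):
    """Detection accuracy broken out per demographic group."""
    hits, totals = Counter(), Counter()
    for sample, was_detected in zip(samples, detected):
        totals[sample[key]] += 1
        hits[sample[key]] += bool(was_detected)
    return {g: hits[g] / totals[g] for g in totals}

train = [
    {"skin_tone": "dark", "gender": "woman"},
    {"skin_tone": "light", "gender": "man"},
    {"skin_tone": "light", "gender": "man"},
]
detected = [False, True, True]  # did the camera find and track this person?

print(representation(train, "skin_tone"))              # exposes underrepresentation
print(accuracy_by_group(train, detected, "skin_tone")) # exposes unequal accuracy

Running both checks, rather than only overall accuracy, is what surfaces the kind of gap Obamehinti found: a system can score well on average while failing the groups the training data shortchanges.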

Many AI researchers have recently raised the alarm about the risk of biased AI systems as they are assigned more critical and personal roles. In 2015, Google’s photo organizing service tagged photos of some black people as “gorillas”; the company responded by blinding the product to gorillas, monkeys, and chimps.

Obamehinti said she found a less sweeping solution for the system that had snubbed her, and managed to ameliorate the Portal’s blind spots before it shipped. She showed a chart indicating that the revised Portal recognized men and women of three different skin tones more than 90 percent of the time—Facebook’s goal for accuracy—though it still performed worse for women and the darkest skin tones.
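
Facebook hasn’t described how it scores that chart. One minimal way to express such a per-slice accuracy bar is below; only the 90 percent target comes from her talk, and the individual slice accuracies are made up for illustration.

# Hypothetical check that every (gender, skin tone) slice clears an
# accuracy target. Numbers are invented; only the 90% goal is from the talk.
TARGET = 0.90

results = {
    ("women", "darkest"): 0.91,
    ("women", "medium"): 0.94,
    ("women", "lightest"): 0.97,
    ("men", "darkest"): 0.93,
    ("men", "medium"): 0.96,
    ("men", "lightest"): 0.98,
}

failing = {slice_: acc for slice_, acc in results.items() if acc < TARGET}
if failing:
    print(f"Slices below the {TARGET:.0%} bar: {failing}")
else:
    print("Every gender/skin-tone slice clears the accuracy target.")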

A similar process is now used to check that Facebook’s augmented reality photo filters work equally well on all kinds of people. Although algorithms have gotten more powerful, they require careful direction. “When AI meets people,” Obamehinti said, “there’s inherent risk of marginalization.”

The company has deployed a content filtering system to identify posts that may be spreading political misinformation during India’s month-long national election. It highlights posts for human review, and operates in several of the country’s many languages. Candela said engineers have been carefully comparing the system’s accuracy among languages to ensure that Facebook’s guidelines are enforced equitably.
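
Candela didn’t detail how engineers make that comparison. A rough sketch of such a parity check, with hypothetical language codes, metric values, and tolerance, might look like this:

# Hypothetical sketch of comparing a misinformation classifier's quality
# across languages so enforcement stays roughly equitable. All values
# are illustrative, not Facebook's.
metrics = {
    "hi": {"precision": 0.88, "recall": 0.81},  # Hindi
    "bn": {"precision": 0.84, "recall": 0.78},  # Bengali
    "ta": {"precision": 0.90, "recall": 0.83},  # Tamil
}

MAX_GAP = 0.05  # allowed spread between best- and worst-served language

for metric in ("precision", "recall"):
    values = [m[metric] for m in metrics.values()]
    gap = max(values) - min(values)
    status = "ok" if gap <= MAX_GAP else "needs attention"
    print(f"{metric}: {gap:.2f} spread across languages ({status})")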

Similar concerns have arisen in a project testing whether Facebook could flag fake news faster by crowdsourcing the work of identifying supporting or refuting evidence to some of its users. Candela said that a team working on bias in AI and related issues has been helping work out how to ensure that the pool of volunteers who review any particular post is diverse in outlook, and not all drawn from one region or community.
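
Candela didn’t say how the team assembles those pools. One simple approach, a stratified draw that caps how many reviewers any single region contributes, is sketched below; the region labels, cap, and data structure are assumptions for illustration.

# Hypothetical stratified sampling of volunteer reviewers so that no one
# region dominates the pool assigned to a post.
import random

def draw_reviewer_pool(volunteers, per_region=3, seed=None):
    """volunteers: dicts with a 'region' key. Returns a region-balanced pool."""
    rng = random.Random(seed)
    by_region = {}
    for v in volunteers:
        by_region.setdefault(v["region"], []).append(v)
    pool = []
    for region, members in by_region.items():
        rng.shuffle(members)
        pool.extend(members[:per_region])  # cap each region's share
    rng.shuffle(pool)  # so assignment order doesn't favor any region
    return pool

volunteers = [{"id": i, "region": r} for i, r in enumerate(
    ["north", "north", "south", "south", "east", "west", "west"])]
print([v["region"] for v in draw_reviewer_pool(volunteers, per_region=1, seed=0)])

Capping by region is only a proxy for the harder goal Candela named, diversity of outlook, but it guards against the failure mode he described: every reviewer of a post coming from one community.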

Facebook’s AI experts hope some of the challenges of making their technology perform equitably will diminish as the technology becomes more powerful. Schroepfer, the company’s CTO, highlighted research that has allowed Facebook’s systems for processing images or text to achieve high accuracy with smaller amounts of training data. He didn’t share any figures indicating that Facebook has improved at flagging content that breaches its rules, though, instead repeating numbers released last November.

Candela acknowledged that AI advances, and the tools developed to expose and measure shortcomings of AI systems, won’t alone fix Facebook’s problems; that will also require the company’s engineers and leaders to do the right thing. “While tools are definitely necessary they’re not sufficient because fairness is a process,” he said.
