6 November 2021

Facebook Drops Facial Recognition to Tag People in Photos


FACEBOOK ON TUESDAY said it would stop using facial recognition technology to identify people in photos and videos and would delete the associated data on more than 1 billion people.

The news marks the end of one of the largest known facial recognition systems. Outside of face unlock for smartphones and applications in airports, Facebook’s auto-tagging feature is perhaps the most common form of facial recognition technology people encounter. In a blog post, Facebook VP of artificial intelligence Jerome Pesenti said the decision reflected a “need to weigh the positive use cases for facial recognition against growing societal concerns.”

Facebook has used a facial recognition system to automatically detect people in photos, videos, and Memories since 2010, drawing criticism from privacy advocates and incurring hundreds of millions of dollars in fines from government regulators. A Facebook spokesperson told WIRED that the billions of photos tagged with the assistance of facial recognition over the past decade will keep those labels. Cues and signals about a person’s social circle that have been gathered from photos and videos using facial recognition will also presumably remain intact.

Facial recognition has come to embody privacy and human rights concerns that have led more than a dozen major US cities to ban use of the technology. Applications of facial recognition by law enforcement have led to multiple wrongful arrests in the US and aided in the creation of a surveillance state to control Muslim minority groups in China.

The decision comes after Facebook underwent weeks of intense scrutiny following the leak of thousands of internal documents that revealed holes in its content moderation. It also comes on the heels of Facebook’s move last week to rebrand itself under the name Meta.

The end of facial recognition for photo tags doesn’t mean the company is abandoning facial recognition outright. Facebook will still use the technology to do things like help users regain access to a locked account or verify their identity to complete a transaction. And although Facebook will delete data on more than a billion faces, the company will retain DeepFace, the AI model trained with that data. About one in three Facebook users today uses the feature that recommends people to tag in photos.

In addition to eliminating automatic photo tags, Facebook will no longer use facial recognition to identify people by name in image descriptions for people who are blind or visually impaired, a feature that applied to a small percentage of photos.

Facebook is the latest major tech company to set aside facial recognition. IBM stopped offering the technology to customers last year. In the wake of the murder of George Floyd, Amazon and Microsoft paused sales of their facial recognition services, citing a lack of action by regulators.

Researchers like Joy Buolamwini, Deb Raji, and Timnit Gebru first documented that facial recognition systems perform less accurately on women with dark skin. Those results were subsequently confirmed by a National Institute of Standards and Technology analysis that also found the tech often misidentifies Asian people, young people, and others.

Raji, who has worked on policy and ethics issues for organizations including AI Now, Google, and the Algorithmic Justice League, called Facebook’s move significant because DeepFace played a notable role in the history of computer vision. The deep learning model was created in 2014 with 4 million images from 4,000 people, the largest dataset of faces at the time. DeepFace was the first AI model to achieve human-level performance in facial recognition, and it sparked a trend of commercializing the technology and hoarding face data to power performance improvements.

Raji said it’s always good when companies take public steps to signal that a technology is dangerous but cautioned that people shouldn’t have to rely on voluntary corporate actions for protection. Whether Facebook’s decision to limit facial recognition use makes a larger difference will depend on policymakers.

“If this prompts a policymaker to take the conversation about facial recognition seriously enough to actually pull some legislation through Congress and really advocate for and lean into it, then this would become a turning point or a critical moment,” she says.

Despite occasionally bipartisan rhetoric about the threat facial recognition poses to civil liberties and the lack of standards for its use by law enforcement, Congress has not passed any laws regulating the technology or setting rules for how businesses and governments can use it.

In a statement shared with WIRED, the group Fight for the Future said Facebook knows facial recognition is dangerous and renewed calls to ban use of the technology.

“Even as algorithms improve, facial recognition will only be more dangerous,” the group says. “This technology will enable authoritarian governments to target and crack down on religious minorities and political dissent; it will automate the funneling of people into prisons without making us safer; it will create new tools for stalking, abuse, and identity theft.”

Sneha Revanur, founder of Encode Justice, a group for young people seeking an end to the use of algorithms that automate oppression, said in a statement that the news represents a hard-earned victory for privacy and racial justice advocates and youth organizers. She said it’s one reform out of many needed to address hate speech, misinformation, and surveillance enabled by social media companies.

Luke Stark is an assistant professor at the University of Western Ontario and a longtime critic of facial recognition. He has called facial recognition and computer vision pseudoscience, with implications for biometric data privacy, anti-discrimination law, and civil liberties. In 2019, he argued that facial recognition is the “plutonium of AI.”

Stark said he thinks Facebook’s action amounts to a PR tactic and a deflection meant to grab good headlines, not a core change in philosophy. But he said the move also shows a company that doesn’t want to be associated with toxic technology.

He connected the decision to Facebook’s recent focus on virtual reality and the metaverse. Powering personalized avatars will require collecting other kinds of physiological data and invite new privacy concerns, he said. Stark also questioned the impact of scrapping the facial recognition database because he doesn’t know anybody younger than 45 who posts photos on Facebook.

Facebook characterized its decision as “one of the largest shifts in facial recognition usage in the technology’s history.” But Stark predicts “the actual impact is going to be quite minor” because Facebook hasn’t completely abandoned facial recognition and others still use it.

“I think it can be a turning point if people who are concerned about these technologies continue to press the conversation,” he says.
