
15 December 2018

MICROSOFT WANTS TO STOP AI'S 'RACE TO THE BOTTOM'


AFTER A HELLISH year of tech scandals, even government-averse executives have started professing their openness to legislation. But Microsoft president Brad Smith took it one step further on Thursday, asking governments to regulate the use of facial-recognition technology to ensure it does not invade personal privacy or become a tool for discrimination or surveillance.

Tech companies are often forced to choose between social responsibility and profits, but the consequences of facial recognition are too dire for business as usual, Smith said. “We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition,” he said in a speech at the Brookings Institution. “We must ensure that the year 2024 doesn’t look like a page from the novel 1984.”


To address bias, Smith said legislation should require companies to provide documentation about what their technology can and can’t do in terms customers and consumers can understand. He also said laws should require “meaningful human review of facial recognition results prior to making final decisions” for “consequential” uses, such as decisions that could cause bodily or emotional harm or impinge on privacy or fundamental rights. As another measure to protect privacy, Smith said that if facial recognition is used to identify consumers, the law should mandate “conspicuous notice that clearly conveys that these services are being used.”

Smith also said lawmakers should extend requirements for search warrants to the use of facial-recognition technology. He noted a June decision by the US Supreme Court requiring authorities to obtain a search warrant to get cellphone records showing a user’s location. “Do our faces deserve the same protection as our phones?” he asked. “From our perspective, the answer is a resounding yes.”

"The only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition."

MICROSOFT PRESIDENT BRAD SMITH

Smith said companies and governments using facial recognition should be transparent about their technology, including subjecting it to review by outsiders. “As a society, we need legislation that will put impartial testing groups like Consumer Reports and their counterparts in a position where they can test facial recognition services for accuracy and unfair bias in an accurate and even-handed manner,” Smith said.
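To make that kind of third-party testing concrete, here is a minimal sketch of how an outside group might compare error rates across demographic groups. The record format and the sample data are hypothetical, not any tester's actual methodology.

```python
# Minimal sketch of a disaggregated accuracy audit. Each trial record says
# which demographic group the test images came from, whether the two images
# really show the same person, and what the service decided. The record
# format and sample data are hypothetical.
from collections import defaultdict

def false_match_rates(trials):
    """Return the false-match rate per demographic group.

    trials: iterable of dicts with keys 'group', 'same_person' (ground
    truth), and 'matched' (the service's verdict).
    """
    counts = defaultdict(lambda: {"false": 0, "impostor": 0})
    for t in trials:
        if not t["same_person"]:  # impostor pair: any match is an error
            counts[t["group"]]["impostor"] += 1
            if t["matched"]:
                counts[t["group"]]["false"] += 1
    return {g: c["false"] / c["impostor"]
            for g, c in counts.items() if c["impostor"]}

# A gap between groups is the kind of "unfair bias" an impartial
# tester would flag.
trials = [
    {"group": "A", "same_person": False, "matched": True},
    {"group": "A", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
    {"group": "B", "same_person": False, "matched": False},
]
print(false_match_rates(trials))  # {'A': 0.5, 'B': 0.0}
```

Disaggregating by group matters because a service can look accurate on average while failing disproportionately for one group.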

Smith’s speech Thursday echoed a call to regulate facial-recognition technology that he first made in July, but offered new specifics. He listed six principles that he said should guide the use and regulation of facial recognition: fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance. He said Microsoft would publish a document next week with suggestions on implementing these principles.

As governments and companies increasingly deploy facial-recognition technology in areas like criminal justice and banking, both critics and tech workers have raised concerns. Amazon Rekognition, the company’s facial-recognition service, is used by police in Orlando, Florida. When the ACLU tested Amazon’s tool, it falsely matched 28 members of Congress against a database of mugshots.
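For a sense of what such a test looks like in practice, here is a rough sketch using the real boto3 Rekognition API. The bucket, file name, and collection ID are placeholders, and the 80 percent threshold mirrors Rekognition's documented default rather than the ACLU's exact setup.

```python
# Rough sketch of an ACLU-style test against a pre-indexed face collection.
# The bucket, file name, and collection ID are placeholders; the 80 percent
# FaceMatchThreshold mirrors Rekognition's default, which critics argue is
# too permissive for law-enforcement use.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.search_faces_by_image(
    CollectionId="mugshot-collection",  # placeholder face collection
    Image={"S3Object": {"Bucket": "audit-photos",
                        "Name": "congress/member_001.jpg"}},
    FaceMatchThreshold=80,  # default confidence cutoff
    MaxFaces=5,
)

# Every hit is a claimed identification; at a low threshold, people with
# no arrest record can be "matched" against mugshots.
for match in response["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```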

Also Thursday, the research institute AI Now issued a new report stressing the urgency for companies to open their algorithms to auditing. “AI companies should waive trade secrecy and other legal claims that would prevent algorithmic accountability in the public sector,” the report says. “Governments and public institutions must be able to understand and explain how and why decisions are made, particularly when people’s access to healthcare, housing, welfare, and employment is on the line.”

AI Now cofounders Kate Crawford and Meredith Whittaker said that their focus on trade secrecy emerged from a symposium held earlier this year with leading legal experts, “who are currently suing algorithms, if you will,” said Crawford. “It was extraordinary to hear dozens of lawyers sharing stories about how hard it is to find basic information.”

Their report also discussed the use of affect analysis, in which facial-recognition technology is used to detect emotion. The University of St. Thomas in Minnesota is already using a webcam-based system built on Microsoft’s tools to observe students in the classroom. The system predicts emotions and sends a report to the teacher. AI Now says this raises questions about the technology’s ability to grasp complex emotional states, a student’s ability to contest the findings, and the way it could shape what is taught in the classroom. Then there are the privacy concerns, particularly given that “no decision has been made to inform the students that the system is being used on them.”
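For a sense of what such a system calls under the hood, here is a rough sketch of the per-face emotion scoring Microsoft's Face API exposed at the time. The endpoint region, subscription key, and image file are placeholders; this is not the university's actual classroom pipeline.

```python
# Rough sketch of a single request to the Face API's detect endpoint with
# emotion attributes enabled. The endpoint region, subscription key, and
# image file are placeholders.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
KEY = "YOUR_SUBSCRIPTION_KEY"  # placeholder

with open("classroom_frame.jpg", "rb") as f:
    image_bytes = f.read()

resp = requests.post(
    ENDPOINT,
    params={"returnFaceAttributes": "emotion"},
    headers={"Ocp-Apim-Subscription-Key": KEY,
             "Content-Type": "application/octet-stream"},
    data=image_bytes,
)
resp.raise_for_status()

# Each detected face comes back with scores for emotions such as happiness,
# sadness, and surprise; a classroom system would fold these into a report.
for face in resp.json():
    print(face["faceAttributes"]["emotion"])
```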

Microsoft declined to comment on the university’s system.

Michael Posner, a New York University business school professor and director of its Center for Business and Human Rights, is familiar with Microsoft’s proposed framework and has worked with tech companies on fake news and Russian interference. In his experience, companies have been reluctant to engage with both governments and consumers. “They don’t like government involvement, in any sense, regulating what they do. They have not been as forthcoming with disclosures, and too reticent to give people a heads up on what’s transpiring. They’ve also been very reluctant to work with one another,” Posner says.

Still, he’s hopeful that leadership from a “more mature” company like Microsoft could encourage a more open approach. Amazon did not respond to questions about its guidelines for facial recognition.
