
9 October 2022

Biden’s AI Bill of Rights Is Toothless Against Big Tech


Last year, the White House Office of Science and Technology Policy (OSTP) announced that the US needed a bill of rights for the age of algorithms. Harms from artificial intelligence disproportionately impact marginalized communities, the office’s director and deputy director wrote in a WIRED op-ed, and so government guidance was needed to protect people against discriminatory or ineffective AI.

Today, the OSTP released the Blueprint for an AI Bill of Rights, after gathering input from companies like Microsoft and Palantir as well as AI auditing startups, human rights groups, and the general public. Its five principles state that people have a right to control how their data is used, to opt out of automated decisionmaking, to live free from ineffective or unsafe algorithms, to know when AI is making a decision about them, and to not be discriminated against by unfair algorithms.

“Technologies will come and go, but foundational liberties, rights, opportunities, and access need to be held open, and it’s the government’s job to help ensure that’s the case,” Alondra Nelson, OSTP deputy director for science and society, told WIRED. “This is the White House saying that workers, students, consumers, communities, everyone in this country should expect and demand better from our technologies.”

However, unlike the better-known US Bill of Rights, which comprises the first 10 amendments to the Constitution, the AI version will not have the force of law—it’s a nonbinding white paper.

The White House’s blueprint for AI rights is primarily aimed at the federal government. It will change how algorithms are used only if it steers how government agencies acquire and deploy AI technology, or helps parents, workers, policymakers, or designers ask tough questions about AI systems. It has no power over the large tech companies that arguably have the most power in shaping the deployment of machine learning and AI technology.

The document released today resembles the flood of AI ethics principles released by companies, nonprofits, democratic governments, and even the Catholic Church in recent years. Their tenets are usually directionally right, invoking words like transparency, explainability, and trustworthiness, but they lack teeth and are too vague to make a difference in people’s everyday lives.

Nelson of OSTP says the Blueprint for an AI Bill of Rights differs from past recitations of AI principles because it’s intended to be translated directly into practice. The past year of listening sessions was intended to move the project beyond vagaries, Nelson says. “We too understand that principles aren’t sufficient,” Nelson says. “This is really just a down payment. It’s just the beginning and the start.”

The OSTP received emails from about 150 people about its project and heard from about 130 additional individuals, businesses, and organizations that responded to a request for information earlier this year. The final blueprint is intended to protect people from discrimination based on race, religion, age, or any other class of people protected by law. It extends the definition of sex to include “pregnancy, childbirth, and related medical conditions,” a change made in response to concerns from the public about abortion data privacy.

Annette Zimmermann, who researches AI, justice, and moral philosophy at the University of Wisconsin-Madison, says she’s impressed with the five focal points chosen for the AI Bill of Rights, and that it has the potential to push AI policy and regulation in the right direction over time.

But she believes the blueprint shies away from acknowledging that in some cases rectifying injustice can require not using AI at all. “We can’t articulate a bill of rights without considering non-deployment, the most rights-protecting option,” she says. Zimmermann would also like to see enforceable legal frameworks that can hold people and companies accountable for designing or deploying harmful AI.

When asked why the Blueprint for an AI Bill of Rights does not mention bans as an option for controlling AI harms, a senior administration official said its focus is to shield people from technology that threatens their rights and opportunities, not to call for the prohibition of any particular type of technology.

The White House also announced actions by federal agencies today to curtail harmful AI. The Department of Health and Human Services will release a plan for reducing algorithmic discrimination in health care by the end of the year. Some algorithms used to prioritize access to care and guide individual treatments have been found to be biased against marginalized groups. The Department of Education plans to release recommendations on the use of AI for teaching or learning by early 2023.

The limited bite of the White House’s AI Bill of Rights stands in contrast to the toothier AI regulation currently under development in the European Union.

Members of the European Parliament are considering how to amend the AI Act and decide which forms of AI should require public disclosure or be banned outright. Some MEPs argue that predictive policing should be forbidden because it “violates the presumption of innocence as well as human dignity.” Late last week, the EU's executive branch, the European Commission, proposed a new law that would allow people treated unfairly by AI to file lawsuits in civil court.
