30 April 2023

Challenges of Implementing AI With “Democratic Values”: Lessons From Algorithmic Transparency

Matt O'Shaughnessy

From major policy addresses and influential strategy documents to joint statements with key partners, a major stated U.S. policy goal has been to develop “rules and norms” that ensure technologies such as artificial intelligence (AI) are “developed and used in ways that reflect our democratic values and interests.” Unfortunately, this is much easier said than done.

A closer look at one of the most accepted norms for AI systems—algorithmic transparency—demonstrates the challenges inherent in incorporating democratic values into technology.

Like other norms and principles for AI governance, efforts to make the inner workings of algorithms more transparent offer clear benefits to policymakers and researchers alike. More detailed information about how AI systems are used can enable better evidence-based policy for algorithms, help users understand when algorithmic systems are reliable, and expose developers’ thorny design trade-offs to meaningful debate. Calling for transparency is an easy and noncontroversial step for policymakers—and one that does not require deep engagement with the technical details of AI systems. But it also avoids the more difficult and value-laden questions of what algorithms should do and how complex trade-offs should be made in their design.

Take recent rules in both the U.S. and China requiring descriptions of when and how certain AI systems are employed. Both serve the important goal of informing policymakers about how algorithms are being used, enabling future AI-related policies to be better grounded in evidence. However, the two inventory efforts are accessible to different groups and achieve very different ends. The United States’ (partially implemented) inventory covers only federal government use of AI but is open to public inspection. Though this scheme ignores ways in which private-sector AI use can violate democratic principles, it enables broad public scrutiny of the algorithms it does describe. Meanwhile, many details of China’s “algorithm registry” are intended only for government consumption, and, like its recent draft rules for generative AI, it will likely function to increase state power over technology companies.

All of this is to say that policymakers seeking to operationalize principles such as transparency in ways that in fact align AI systems with democratic values should focus on two questions: “for whom” and “to what end.”

Transparency for Whom?

First and foremost, who should use the information released by transparency measures? Consider three examples of AI transparency efforts that would result in varying degrees of public accessibility (and, consequently, varying degrees of external scrutiny).

Close to the least accessible end of the spectrum are internally facing transparency measures for AI developers. We see this type of measure detailed in guidance from U.S. financial regulators on the use of algorithms, which recommends corporate structures that give risk management officers and boards more insight into design decisions made by engineers, but without any possibility for stakeholders outside of the financial institution to suggest or force changes.

By contrast, some efforts make such information broadly available to the public. At the opposite end of the accessibility spectrum, for example, is a recent state law in Idaho mandating that “[a]ll documents, data, records, and information” about algorithms used in certain parts of the criminal justice system be “open to public inspection, auditing, and testing.” Transparency measures like the Idaho law not only require extensive disclosure but also place no restrictions on who can access the disclosed material, opening it to broad scrutiny.

Measures like the European Union’s proposed AI Liability Directive occupy a middle ground. The directive would allow courts to require information about AI systems to be disclosed during legal proceedings when the complexity and opacity of an algorithm would place an unreasonable burden on plaintiffs. These types of transparency measures do not mandate full or broad disclosure but, rather, tailor requirements to specific legal needs.

One key factor differentiating these transparency policies, beyond the specific technical types of information disclosed, is who has access to these disclosures. The financial guidance in which transparency is intended only for internal corporate use, for instance, positions developers as judge and jury. One could speculate that this leads to a less-than-ideal incentive structure: Problems surfaced by internal transparency processes are less likely to be resolved when doing so would conflict with a company’s financial interests. Europe’s proposed AI Liability Directive, by contrast, provides transparency to precisely the parties that may have been harmed by an AI system, enabling a more meaningful form of accountability.

Another important dimension to the accessibility of algorithmic transparency measures is whether the actors who have access to released information have the capacity to translate it into meaningful protections for individuals. AI systems typically involve two components: the algorithm itself (which may be described by thousands of mathematical expressions or lines of code), and the data it was trained on (which could be incredibly vast). An engineer can draw on parts of this information to develop and test hypotheses about the types of patterns AI systems are extracting from data, akin to a doctor using an array of (sometimes contradictory) test results to make a tentative diagnosis. But members of the public, like patients attempting to interpret inscrutable test results, lack the resources and know-how to turn complex mathematical details of algorithms and data sets into answers as to whether or how an algorithm might have produced an unfair impact.
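To make this concrete, the minimal sketch below imagines the kind of narrow hypothesis an engineer with full access to a disclosed model and its training data might test: whether a nominally neutral feature, such as a zip code, acts as a proxy for a protected attribute. The file name, column names, and probe model are hypothetical placeholders rather than part of any real disclosure regime; the point is that turning raw disclosures into a conclusion about unfairness requires exactly this kind of specialized tooling and judgment.

```python
# Illustrative only: one hypothesis an analyst might test against a
# hypothetical disclosed training dataset. All names are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical training data released under a transparency requirement.
data = pd.read_csv("disclosed_training_data.csv")

# Hypothesis: the "zip_code" feature encodes a protected attribute.
# A crude probe: how well does zip code alone predict that attribute?
X = pd.get_dummies(data[["zip_code"]].astype(str))
y = data["protected_attribute"]  # assumed binary for this sketch

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])

# A high AUC suggests the feature is a strong proxy; an expert would then
# weigh how heavily the disclosed model actually relies on that feature.
print(f"Proxy probe AUC: {auc:.2f}")
```

Even this simple probe presumes familiarity with statistical tooling, access to the underlying data, and the judgment to interpret an ambiguous result, which is precisely the capacity most members of the public lack.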

Releasing technical details of AI systems without simultaneously enhancing the capacity of the public and civil society to use them is not unlike airplane manufacturers releasing detailed blueprints of aircraft and then expecting travelers to individually assess them for airworthiness. Broad algorithmic transparency requirements will therefore depend on well-resourced institutions of accountability to translate newly available information into concrete protections that affirm democratic values.

Without equipping domain-specific government and civil society watchdogs with the technical and regulatory capacity to protect the public interest, the burden of determining whether and how algorithms produce harm will remain squarely on already-overburdened individuals. In Idaho, for example, there are few signs that the civil society organizations that could hypothetically take advantage of the algorithmic transparency law have the resources to do so. As a result, only the individuals with the most resources will be able to benefit from new transparency regulations, undermining the theoretical value that algorithmic transparency can provide to liberal and egalitarian conceptions of democracy.

Transparency to What End?

Second, what ends should transparency measures serve? One common democratic objective for AI systems is that they satisfy core tenets of liberal democracy, such as predictable laws and due process rights. These liberal ideals can be violated by algorithmic systems making predictions about individuals’ behavior based on data from others’ past actions, robbing individuals of agency. Algorithms also threaten the democratic ideal of equal protection of rights, for instance, by using data that can act as proxies for traits such as race and gender in ways that risk perpetuating historical biases in socially important settings such as tax auditing, insurance underwriting, or predicting criminal recidivism.

Transparency is often a necessary component of promoting these ideals, but transparency alone is insufficient to ensure that actors developing and deploying AI systems adhere to certain values. To better affirm liberal and egalitarian democratic values, policymakers should reach beyond the technical toolbox and bolster transparency measures with funding for research on topics such as algorithmic fairness, as well as increased resources for the institutions of accountability that monitor the impacts of AI systems on individuals.

One approach policymakers can take is to increase the financial resources and technical know-how of government bodies that can translate algorithmic transparency into individual benefit. Recent legislation creating a centralized body for AI government expertise and a bureaucratic mechanism to track AI-related talent in government is a much-needed first step, but rather than simply boosting the government’s use of AI, policymakers could concentrate funding on the regulators, consumer protection agencies, and civil rights offices that mitigate the negative impacts of AI systems. Dedicated resources provided by the Federal Trade Commission’s new Office of Technology, for example, could translate algorithmic transparency into values-affirming protections for individuals and might serve as a model for other consumer protection and civil rights offices.

Strategies should further compensate for ways in which the complexity of algorithms may tilt the playing field against those with fewer resources. The EU’s proposed AI Liability Directive, for instance, would empower those impacted by AI systems by reducing the burden of proof in litigation involving some particularly complex algorithms.

Another important objective for AI systems is that they support electoral and deliberative democracy, enabling and promoting a fair electoral process and deepening the public’s ability to understand and debate the proper role of algorithms in society. Transparency in key parts of the electoral process, such as for the algorithms used to draw voting district lines, indeed enables scrutiny and oversight. But here, too, algorithmic transparency is necessary but not sufficient. The more fundamental conditions needed for democratic deliberation about AI systems—trust in institutions, an engaged society, and respectful debate—will be far harder to achieve than transparency.

Strengthening these factors requires investments in democratic institutions. Rather than considering AI transparency measures in isolation, lawmakers could broaden their focus to policies that support local media and other institutions of accountability. Broader stakeholder outreach by both developers and the government would allow impacted individuals to have a greater say in how AI is used. Efforts to create a more diverse AI workforce and educate the general public about AI, such as those assigned to the National Science Foundation in the National AI Initiative Act of 2020, would organically enable more meaningful oversight of algorithms. These types of efforts might seem staid, and they offer few immediate and flashy signs of progress for policymakers to point to. But more substantial investments in these types of democratic principles provide the other necessary conditions that, when paired with algorithmic transparency, enable AI systems to meaningfully serve democratic values.

It’s Not Just Transparency

Developing normative AI principles like transparency faces a fundamental tension. To be accepted and adopted, such principles must be flexible enough to fit diverse political and business models. But principles that resonate in completely different ways within various political-ideological systems simply provide rhetorical legitimacy, entrenching diverging values rather than truly pushing technology to serve democratic goals.

Indeed, lurking just below the smooth rhetorical surface of bipartisan and transatlantic commitments to shared “democratic values” lie deep political and philosophical divisions over which aspects of liberal, egalitarian, electoral, or deliberative democracy policymakers seek to further. One influential AI strategy document’s conception of “democracy” as “limited government and individual liberty,” for instance, can lead to very different policy prescriptions than conceptions emphasizing “equal opportunities” and “equitable access.” By orienting discussions around the questions of “for whom” and “to what end,” policymakers can translate the answers into policies that in fact affirm their values.

Transparency is far from the only technical norm put forth for AI systems whose ultimate impact on democratic values depends on the political-ideological systems in which it is realized. The principle of accountability raises questions of who, exactly, should have responsibility for overseeing AI systems and what types of trade-offs they should be empowered to make. Common principles that AI systems be “human-centered,” “inclusive,” or “free from bias” invoke complex assumptions specific to individual regions, cultures, and political systems that must be carefully unpacked.

The difficulties of ensuring that the principle of transparency affirms democratic values suggest a broader challenge for the endeavor of building technology that furthers democracy. Norms that reach only to the technical toolbox will be insufficient. Policymakers must also invest in a capable and trusted government, civil society, and public.
